6 March 2024

Frode Hegland: Yeah, for the recording and for the record: Andrew, that's really nice. I have real problems with my movement and interaction, but those are the kind of things we need to test and tweak. What you've done here is really good. Hi, Adam. I'm sure it needs.

Andrew Thompson: Lots of tweaking.

Frode Hegland: Yeah, but as it should be. It's really lovely.

Andrew Thompson: When you say you have trouble selecting, do you mean the drag is too slow for you? You can't quite get it where you want fast enough?

Frode Hegland: I think it was also the wrong hand. But the biggest problem I have is I can’t use my palm to get it up. It’s a bit low, so, you know, I’ll test it like this.

Andrew Thompson: The swiping. The swiping you’re talking about. Oh, that didn’t change since last time. Interesting.

Frode Hegland: Hang on. I'm just going to send the link here. Get a public link. I'll keep posting that a few times. He's at the choir. Awesome.

Dene Grigar: Good morning. Adam.

Frode Hegland: I'm not sure if he can hear us. He's at the choir. How funny is that?

Rob Swigart: Morning.

Frode Hegland: What? Who just said morning? That was your voice, Rob? It did not sound like yours. That was most bizarre. Okay. While we're waiting for the room to fill in a bit: as I mentioned to Dene before, I sold my Leica today to help pay for my programming. But don't worry, I do have my Lumix, and as a very good exchange deal I picked up a secondhand 85mm f/1.4 Sigma lens. The thing is insane. It is so good, sharp, etc., etc. But what's relevant to us, in a way, is the autofocus, which in modern cameras is now AI-based. So, you know, I'm walking down the street and I point the camera roughly where there is a person, and it'll get that person tack sharp instantly. That's one of the unexpected things. In the olden days, to get that, you had to sit and do this and that, and now it's just boom.

Rob Swigart: The unexpected for the person. Well, yeah.

Dene Grigar: Just saying.

Frode Hegland: Well, if you’re going to be street photographed, you might as well be sharp, right?

Dene Grigar: Well, you know, as a woman, it would be pretty freaky to see some man I didn't know taking pictures of me. It's like, who are you? What are you doing with my photographs?

Frode Hegland: The whole notion of that is absolutely important to keep in mind.

Dene Grigar: Hello, Peter. Good morning.

Peter Wasilko: Good morning, from brunch in rainy New York.

Dene Grigar: I’m still knocking down my morning coffee. I was just on the East Coast for the funeral, and I can tell you that I prefer East Coast time.

Rob Swigart: Oh, well.

Dene Grigar: I'm behind three hours when I wake up.

Frode Hegland: So I’m just going to check on Mark. I’m not sure if he’s joining us.

Dene Grigar: Andrew, you hardly ate. You didn't eat anything.

Andrew Thompson: I quite enjoyed that. That shrimp gumbo that you made, that was good. I had that a couple times.

Dene Grigar: Good. I'm glad you liked it. Did you have any coffee?

Andrew Thompson: I didn’t have coffee. I had some wine.

Dene Grigar: Okay, good. What did you think of the wine I left you?

Andrew Thompson: I thought it was good. Yeah. I hadn’t had that one before. I don’t know what it was. It was in a glass jar, but.

Dene Grigar: Yeah, I had it in a decanter. It's delicious. It's an Italian, got a few years on it. So it was really good. Nice.

Andrew Thompson: Yeah, yeah. Pulled it out on Friday when I was done with work, I was like, just going to relax.

Dene Grigar: Yeah, by the fire. Fruitbat said he really misses you. He said I want that boy back. Yeah.

Andrew Thompson: He was nice. He was. He was very demanding, but he was very sweet. So he, like, made up for it. You know, he really wanted to go outside when I was, like, packing up on Sunday. And I was like, I can’t let you out because I’m going to be gone. Yeah, I bet he wanted to just book it as soon as you got there.

Dene Grigar: Oh man, I got in. He was like. Okay. Hi, Femi. I want to go outside.

Andrew Thompson: Yes. That’s right.

Dene Grigar: Yeah. Think I’ll go outside. So he’s been pretty smoochy since I’ve been home, so. Hey, Mark. Good to see you.

Mark Anderson: Sorry, I was trying out the new work in the headset.

Frode Hegland: It’s amazing to be able to do that, right. It’s no longer just ideas. It’s fantastic.

Dene Grigar: Hey, Rob Swigart.

Rob Swigart: So. Hey.

Dene Grigar: Are you coming up today to campus, I hope?

Andrew Thompson: Me? I could. I'm going to come to your place for sure, to give you the remote.

Rob Swigart: Well, I would if I could.

Andrew Thompson: Yeah. So if you just want me there for the remote, I'm planning on dropping that off around, like, 6:15 at your place. Otherwise, if you want me there to work and stuff like that, I can come in earlier. Either works.

Dene Grigar: Yeah. I was going to say, if you just came in for a couple of hours and worked with me before I go to class, and then went home; that way you're home. Or are you going to go see Simone? Is that why you're coming at six?

Andrew Thompson: No, I've got, like, music practice with a crew.

Rob Swigart: Okay. Okay.

Dene Grigar: Well, let me think about it. We’ll talk later.

Andrew Thompson: Yeah, we can discuss in our meeting, but I’m going to give it to you today at some point, no matter what.

Rob Swigart: Yeah. Okay.

Frode Hegland: You know, it's funny, I was joking there that I would come to campus if I could. So I have this Starbucks mug. Not that it's my favorite brand at the moment, but I bought this while I was with you, Rob, in the Valley, and I brought it with me to the lab with Dene every day, and here I'm sitting with it. This physical artifact is really interesting like that; it kind of places me more with you guys, which is kind of cool. Anyway, I think we can start. Randall will be late today; he did tell us ahead of time. I have added a link to our dear old, where is it now, agenda. So the first little item is that Author for Vision seems to be happening. It's much more advanced than I expected for our test period, which is nice. I was going to invite Dene and Rob to test it, but the testing thing is a bit weird, so I may just release it quietly, tell you guys about it, get your feedback, and then we update it. But it's a real thing, and that's really weird. And that's why I sold my Leica today. I must say that many times, because I demand brownie points for my dedication. Right? Second point is: are there any other announcements? Working in the Vision Pro in various coffee shops is just not shocking to people here; it's just normal.

Frode Hegland: But I found that it isn't as useful as I expected, because there isn't as much work software, which is why I'm excited by Author, at least for myself. But I did give Apple a lot more money by buying the trackpad and keyboard, only for that, because last time I was trying to talk to Dene, it took forever to get the keyboard back. It works well on either, but switching is a pain. And surprisingly, I don't know if you guys know this: if you use the trackpad in the Vision, it's not confined to the rectangle of your word processor or whatever. It produces a little floating dot and you can go anywhere with it. So you can be using a trackpad and go in between things too. If you don't want to use your eyes, that's a very bizarre but useful affordance. I think that's worth noting. So I have added you, Mark, to the agenda today, but only in a very small way. Dene goes first; she's going to talk a little bit about case studies, and then we'll see what thoughts you have on it, Mark, because we need to decide in our community what we're going to talk about and what we're going to implement. Right? We can't make academics happy in every part of their lives. So with that big fanfare, over to you, Dene.

Dene Grigar: Yeah. So one thing that we didn't do in the grant was tell Sloan exactly what we're going to produce for academic use. Right? We didn't define that. We wanted to leave that open so that we would have some time to talk about it and think about it. And now that I've gotten through all the grant writing that I've had to do, and all the big grants are in, I can take a breath and step back and think about the case study. So, what I posted into the Slack channel a couple of days ago: the three most prominent activities of an academic. And I started with a research question, as I do with all things, and that is this: What is the effect of XR upon three common academic tasks? One, preparing to write an academic article; two, editing review articles for an academic journal; and three, assessing a graduate seminar paper. Now, this isn't everything we do. I've got a list over here of other things, like preparing a keynote at a conference, preparing a conference presentation, evaluating tenure and promotion cases, evaluating faculty annual reviews. Right? There's a whole list of things that we do. But these three, everybody does, whether they're tenured or non-tenured, adjunct or whatever. This is something we all share. Specifically: does XR enhance or improve the quality, effectiveness and efficiency of these three tasks, and in what ways? This means I'll have to define what I mean by quality, effectiveness and efficiency.

Dene Grigar: And I'll do that in the paper. And we can have a discussion about that as well; I'm happy to take suggestions. I imagine this in three parts. Part one: starting with the ways in which these activities are undertaken, given the constraints and affordances of the computer tools we have currently. So I will go through and talk about each one of these activities and the way in which I'm working currently. And you can't see this right now, but the way I function is: I've got two desktops here. This one is the one I'm using for communication, you know, Zoom. It's big, roomy. This bigger desktop here is for design, so I can see a lot of real estate for designing things. Right over here is my laptop, which I carry with me to school; everything that's on these two computers is on that computer. And then I have my tablet here in the middle of all of this, where I'm generally monitoring my Slack and email, and then my phone. All of these talk to each other, and I can copy something on my laptop and drop it into the file I've got open on my tablet, and I can do the same thing across all five devices. And I'm moving around this console like this, right? This means that, when I'm writing, like the grants,

Dene Grigar: I've got the grant copy here, I'm on the web here looking up stuff, I'm on Slack here. It means I'm moving through five spaces to put things here, right? Now, if you calculate how much money that is in terms of technology: I mean, this computer I just bought is like $3,000. This one was $2,500. I'm already way past the headset, you know. That's not counting my iPad, which is another, what, $750? And my laptop, which is another $2,800. Right? So I've got $10,000 worth of equipment here just to do a simple task. You know, when I'm reviewing: I'm a Leonardo editor, right? The Leonardo journal; I've been doing this for 20 years. I'm sent seven reviews of books that I have to edit and get ready within three days. I've got them sitting here. I'm editing here. I'm on Google here, making sure that all the authors in these journals have got the right page numbers, the right ISBN numbers. So I'm across these different environments. I'm imagining that when we master this project that we're doing, I won't have to have anything but a headset, and maybe a keyboard and a trackpad, and I can do it all in one space. Right? That would make my life a lot easier. And I think it's a lot cheaper in the long run.

Dene Grigar: Headsets are going to come down anyway. I can do this in a $4,500 headset as opposed to $10,000 of equipment, and everything can sit in one place. So anyway, part one: I want to think about how I'm currently working to do these three tasks. Part two: I want to think about how this could be done in XR. The quality is good, the effectiveness is good, it's efficient. Right now this is more efficient, I can say, than a PC user who can't copy and paste across desktops, right? And then part three: what is gained by expanding the tools and environments to include XR, and what is not yet possible? What I'm imagining isn't able to happen yet, but that's where we're headed, right? And directions that XR can go to respond to this. So I think this will be very helpful. As I said, this is not everything that we do, but it's three things that anybody in academe does. Right? And then there's more. I'm imagining that this will be the article I submit for the Future of Text book, and I'll give the paper at the conference. This will also give us a chance to think about what we want to build first. Andrew's made a lot of progress; thank you, Adam and Andrew, for the work you're doing. So, which of these three activities would be the one we should start with, if not these? And I'll open the floor.

Frode Hegland: But can you elaborate further on your preferences? Forget other academics. Just say what Dene wants.

Dene Grigar: I think probably writing the academic article. I think that's at the heart of what academics do. We write. We read and we write, and that encapsulates everything. I don't want to say a book, because books are too long to think about right now; an article would be a more doable object to work on. And then I think the next preference would be, in the order I gave it, the editing and the review articles. We are all called on all the time to review things. In fact, I always say at the heart of what an academic does is to scrutinize, right, Mark? We're always evaluating everything, from the way something looks to the way it sounds to the way it reads. We're constantly on it, like a hen on a June bug, as we say in Texas: picking it apart, trying to find the fallacies. And once we can answer all the questions and fix all the problems, then we can bless it and let it go forward.

Frode Hegland: So basically you're saying reading and writing, in terms of writing.

Rob Swigart: And reading.

Dene Grigar: And writing and scrutinizing.

Rob Swigart: Yeah.

Frode Hegland: So the reading and scrutinizing thing. Okay. Perfect. Everyone else, please.

Mark Anderson: Just a drop-in from me, because I think it sort of sits somewhere in Dene's set. The thing that sprang to my mind when I was reading the list was peer review.

Rob Swigart: Yes.

Mark Anderson: Which is a slightly different mix of the same things. It's one of the things, actually, where you have to read the wretched article whether you want to or not, however much it outrages you. So in a sense it's an odd task, and it does make you do something rather out of your own style. But you have to tick all the boxes: you've got to look at the references, you've actually got to try and understand the narrative that's given. I don't think it necessarily needs to be another test case as such, lest we overload ourselves, but I was just thinking in terms of practice. One other thought that came from listening to this, in terms of the reading: the reading tends to be a deconstruction task. So it's sort of the antithesis of the writing, which is trying to do the even more difficult thing of cramming it all back into this tiny box which we call a paper; you have to fold things really small just to get them in the right place. So in a sense the reading-slash-deconstruction is possibly the more immediately tractable one, because it's probably easier to work out the bits you want to take out. In the writing there's a sort of internal process going on as well, and we have to think about how we communicate that out towards the space. But I think they will generally use the same sort of blocks, the same interactions, the same things, just in a slightly different way.

Mark Anderson: And I'm conscious, in the writing, that we have the known issues about working out just how nicely and how well we get on with all these sorts of things, where clearly there's a bit of bedding down to do, which is entirely to be expected at this stage of the game. The fact that it's a bit clunky now isn't something that worries me unduly, because we know where we are on the continuum. But yeah, reading. Well, it's not reading, as Dene rightly says; it's basically constantly assessing. A good case in point, actually, was the article in Nature that I think you kindly posted. What was really interesting, as you peel back the layers, is that it turned out to be not what it said. There's a slightly hyperbolic framing in Nature saying, well, end of the world, all the stuff is missing. When you read down into it, it's slightly more nuanced. It's still worrying, but worrying in a different way; it wasn't that the stuff was disappearing. I mean, I learned for the first time about dark archives, which is something I'd never heard about, and which makes eminent sense. But it's actually a really good case: to even get to the point where I thought I understood what the original puff piece said, I had to not only go into that article, I had to read three or four of its immediate references.

Dene Grigar: Just think about, in books, Robert Coover's "The End of Books". The title, right? You read the title and you thought, oh my gosh, he's arguing that books are gone because hypertext is going to take over. That's not what he says at all.

Rob Swigart: Yeah, yeah, yeah.

Dene Grigar: But nobody really read that article deeply. And so that became the hyperbolic response to the article, right?

Mark Anderson: Yeah. And it's difficult, because we're all prone to this. And I'm sure it wasn't really the intent of the original piece, but it's just the way you receive it as you go through. So it was interesting; it took quite a lot of reading just to get to the point of, oh, right, now I see what this person is on about. Interesting, genuine cause for concern, but not at all what it sounded like in the first place. Which I think is a good microcosm of this process of abstracting meaning from something that appears quite formal and structured on the outside, but actually leaves room for interpretation, or indeed misinterpretation.

Dene Grigar: And as I mentioned before, Katherine Hayles's book How We Think talks about three types of reading. Right? And the notion of deep reading is something that we don't practice so much anymore. Our students aren't inculcated to deep read, and forcing them to do so is a task. But certainly they can read quickly; they can do the scanning part. And that also is valuable, because we have to scan. I mentioned to all of you that in graduate school I read four books a week, one for each course I was taking. And there were, you know, 16 weeks; you do the math, right? How fast can you read? And I read every freaking book that I was given, but some more deeply than others. So we mix those tasks up. So, Rob, what do you think? You're an academic. What do you think should be the first thing we do of these three?

Rob Swigart: I once was an academic. I remember I had written an essay about Star Trek for a friend in Italy, who published it, and I sent it to a journal of science fiction in the US. And it was rejected because it was not academic. So I'm not an academic. I never was, and I never will be. I hated the whole process; it all seemed very petty and backbiting, and not very loose and free or willing to accept things that weren't in their mental box. So what I'm interested in here is taking the stuff that you guys are doing and using it for writing. Like Dene, I have a lot of stuff open. Mostly I do a lot of research, and I have a lot of stuff open: I have Google Earth in one window and web browser searches in other windows, and I clip stuff and put it into my notes file, because I don't know a better way to do that. It's just another Pages document. And then I have to find things in that.

Rob Swigart: And, Frode, I think I told you that I would really love to be able to get all that stuff analyzed and sorted and arranged in a way that's more accessible than it is now, because I have to search for terms in my notes; I've got 100 pages of notes now. So I'm thinking that the headset may turn out to be a useful environment in which to do stuff, but I haven't yet slogged my way through getting there. I bought a new keyboard, and I haven't yet synced it up with the headset, but I will, because with the keyboard I was using with my laptop I had the same problem Frode did: it just takes forever to get it decommissioned and recommissioned in the headset. So having two is good. I did not get a trackpad, because I read somebody thought it wasn't a good idea, and I couldn't get the trackpad to work properly in the headset. It would only work in certain regions and not in others; it didn't click. I don't know, I'm doing something wrong.

Frode Hegland: Of course you are. And that’s the whole point. Apple wants us to say we’re doing something wrong so we buy another piece.

Rob Swigart: Yeah. That video that the guy did about living and working inside the headset: he had a keyboard, and the keyboard had some kind of software attachment to it that followed it around. And he said that was a feature of the Magic Keyboard. Well, my Magic Keyboard didn't have that, so I don't know where that came from or how you get to it. But it looked useful.

Frode Hegland: The keyboard. Your old keyboard was not an Apple one, right?

Rob Swigart: It was okay.

Frode Hegland: All right.

Rob Swigart: Well, let's see. I got this one and put it on the computer, and I decommissioned the previous one. This one has a fingerprint reader, so I can log on without reaching all the way up, an extra six inches. Anyway, let's see. So I was going to use this keyboard with the headset.

Frode Hegland: But is that an Apple one? Yeah.

Speaker7: I couldn’t exactly.

Frode Hegland: Oh, they only do the black one with the numeric pad, which I think is a bit much. But okay. Yeah. No, that should work.

Rob Swigart: No, I have a white one.

Speaker7: Right. Okay. This is the one.

Rob Swigart: This is an Apple. And it's white, and it's cheaper.

Speaker7: Exactly.

Frode Hegland: Peter.

Peter Wasilko: Okay. I still think, in my gut, that for sit-down VR and XR, having a 3D mouse would be incredibly useful. But of course, since Apple doesn't have their own 3D mouse, they're not pushing that, and 3Dconnexion, which makes that wonderful wireless 3D mouse that I use in Blender, isn't supporting the Vision Pro. At least they haven't talked about it yet, but hopefully at some point we'll see that emerging. As far as what I do: a lot of the time I'm trying to draw connections between books from different disciplines. So I'm reading a lot of historical material, and I will find a reference to a person. Then I will go and check the indices of other books I might not have read. If I can get a copy of the index ahead of time and see whether that particular person or event appears in the index of the other volume, that's a deciding factor in how I triage books that I don't physically have in my hands yet, to decide what I want to pull in and read next. As a result, there's this large web of links between different items in my bibliography, and it would really help if there was some sort of mechanism for visualizing that in XR, to help recognize the connections. Now, the connections are in a couple of different dimensions.

Peter Wasilko: One dimension is overlaps of people, another dimension is overlaps of location, another dimension is just broad concepts and trends. For instance, the New Towns movement and City Beautiful, kinds of things like that, which can appear across sources and across disciplines, because you'll find layers of it appearing in architecture and layers of it intersecting in urban planning. Tinderbox is the closest program I know now that lets me try to build those kinds of webs. And if there was something that could exploit the extra dimension that Tinderbox doesn't have, since Tinderbox is a 2D program, I could see getting some really interesting visualizations that could help scholars navigate through things. And then I also wish it was possible to overlay the connections that other people have drawn, so we had some sort of metadata layer that I could merge: here's Frode's string of connections between books, here's Mark Anderson's, and here are the spots where they overlap. So we might find a link emerging between a couple of sources, two hops out, that wouldn't have been intuitively obvious otherwise. That might suggest we'd want to bring those two sources in and look at them side by side, because they have so many strands connecting them.
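As a thought experiment only, here is one way the metadata-layer merge Peter imagines could look in TypeScript. The Link shape, the author field and the two-reader threshold are all hypothetical; no such format exists in the project yet.

```typescript
// Merge several readers' link layers and keep only the connections
// that at least two people drew independently.
type Link = { from: string; to: string; author: string };

function sharedConnections(layers: Link[][]): Map<string, string[]> {
  const seen = new Map<string, string[]>(); // edge key -> contributing authors
  for (const layer of layers) {
    for (const link of layer) {
      // Normalize direction so A->B and B->A count as the same connection.
      const key = [link.from, link.to].sort().join("::");
      const authors = seen.get(key) ?? [];
      if (!authors.includes(link.author)) authors.push(link.author);
      seen.set(key, authors);
    }
  }
  return new Map([...seen].filter(([, authors]) => authors.length >= 2));
}
```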

Frode Hegland: All right. So.

Peter Wasilko: So I guess I'm thinking of the pre-writing stage of the paper. I'm more on the deeper organizational and background side of the project, as opposed to: okay, I've already decided what subset of resources I'm using, and I'm going to try to coalesce them into a paper on a narrow subtopic and get that published.

Dene Grigar: Can I respond to that? So, Peter, that's great. I think all of that is embedded in what I'm imagining. When I write a paper, I start with a research question. I have a question that I want to answer, right? And then I start looking around for resources, things to trigger me, to see what other people have said about this topic: the literature review. I don't even start to think about structure in my article until I do that kind of literature review. If I find that it's virgin territory, nobody's really written about this in this particular way, that's great. So now I mark that part off and I go to the next part: what are the resources that will take me in the direction I want to go? Then I'll pull those sources together. Then I start to think about how I want to structure this. Even writing this little blurb here for today: I wrote that on the plane. I write in my head a lot, right? Then I got it on paper.

Dene Grigar: The first round, I reviewed it, changed it, revised, revised, revised, until I had this particular structure: part one, part two, part three. I haven't even started to write or read yet around this topic. So I do think it's all very much embedded in what I'm already presenting to you. Before I wrote this, though, I did a lot of reading for the grant. So all that reading has happened, right? And now I'm interested in what else we can do, and the more practical part of this. All the videos you're posting, all the things we've got in our Slack channel, are very helpful. It's not just books that I'm reading; I'm also looking at video clips, all kinds of media, right, in this day and time. So the article I'm writing is not, quote unquote, so academic that it's snobby about referencing video and games and things like that. But certainly it does include the very thing you're talking about, and that structure. Thank you. Mark?

Mark Anderson: Something that echoes for me in you saying that is: I realize actually how discursive my reading is. I mean, happy is the day when I can just sit down and actually read something, because much of the time reading is actually bouncing around. It's probably made worse by having the web, because you can just go and look stuff up. And it's not even necessarily as formal as saying, right, I'm going to look at this reference. It's that partway through I say, wait a minute, I think I've read something about that before, and that now chimes with this thing; I need to go back and have another look at that, or almost just see a snippet of it to bring it back to mind, to be able to re-juxtapose it. And it's something that to my mind naturally sits in a plastic environment, if I can use that term, a constructive environment, such as we can have in XR. And it's not so much the VR part; it's more being able to say, okay, this sort of has a relation. It's like doing a jigsaw: these bits go vaguely together, those bits go together, these are not those. I don't know quite how, but I know they're all part of the whole. It's that sort of thing.

Mark Anderson: And just while I'm on the mic, I'll show you. Let's see if this works. So that's basically my primary workspace. It's a pair of 24-inch screens; my laptop is on a riser and normally just has my email on it. Essentially, to generalize, I'm using one screen for input and one for output. So there's one where I'm either annotating or marking stuff up, and the other one is normally for consumption. I might be reading something in one and annotating in the other. Which is why I try not to do serious work when I've only got a laptop screen, because I can't work productively like that; there's not enough thinking space. Which is another interesting point about the XR, because obviously if I'm sitting on a train, I can't haul a couple of screens out; people might complain. But if I had an XR space where essentially I could have that wider thinking space, it sort of makes sense. So it does seem like an idea whose time has come, and obviously we're at a very early stage of it. Peter.

Peter Wasilko: Yeah, another really crazy thing is that I've found TV Tropes to be an incredibly valuable resource. Even though it's rooted in fiction, the individual articles and themes have links at the very bottom of the footnotes to real-life examples, so sometimes I'll be able to use that to access a piece of literature that I wouldn't have found otherwise. And it's all backdooring in through pop-culture reference points that are linked to the other things, and they do a lot in pulling things together across domains and fields. So that makes it a really interesting resource. And again, even though it was intended for fandom, for organizing and finding things, it actually can be used for serious scholarship as a finding aid, which is totally bizarre. It's like the last thing that you'd expect. You probably wouldn't want to cite it in your papers as how you drew some incredibly brilliant connection between sources you wouldn't have stumbled on otherwise. But it's something that we should have a look at at some point.

Dene Grigar: Well, can I mention something back to Peter about that before we move on? I'm not interested in talking about software, and probably not even a lot about hardware. I'm going to think about processes, so that it becomes more generalized. So: what are the processes involved in these things, and how can those processes be improved? The process of reading, the process of writing, the process of weighing, assessing. So yeah, Tinderbox is great, but I'm not going to talk about software in this, I'm hoping.

Speaker7: Yeah.

Frode Hegland: No, absolutely. That was most of my point, actually: what an academic does, in terms of the process, is the point. Also, Peter, I know I'm saying it a lot, but you really need to go to the Apple Store and try the headset, because when you talk about a 3D mouse, there's absolutely no need for it yet. Hopefully we can develop more volumetric environments, but that doesn't really happen yet. Even with the trackpad, even with this flat thing here, it is kind of incredible how it jumps between the spaces you're working on, though you can't actually use it to pull things towards you. In a way, they've done an absolutely phenomenal job for this beginning stage, and I'm hoping that through the process, as Dene is discussing, we can bring things more and more into the third dimension. And soon we're going to switch over to Andrew, who will be showing us what some of us have seen. But I see Mark has a deep thought.

Mark Anderson: Well, I don't know if it's a deep one; it's a very quick one, just following from what Dene said about thinking about the process. I know I probably sound as if I think about data a lot. It's less data from a computer-science perspective; it's just that in order to do this process that I want to do, what will a computer, which can't think like a human, need to have? Because this is where, in the knowledge-tool space, things keep going wrong: a lot of people assume computers think like humans, and of course they don't. So you actually have to think: okay, what have I got to externalize? What have I got to capture? Which to me is intuitive and internal, but is absolutely fundamental to, for instance, the kind of things we're asking Andrew and Adam to do. Because if it's not there, it can't be used.

Speaker7: Yeah.

Frode Hegland: Metadata, metadata, metadata. You know, so obviously I agree with that. Dene, we should put this on the agenda for next week as well, right? I think this was not enough; this was just the introduction to it. And anybody, particularly you, Mark: if you want to write something down for this, that's really appreciated. If you want to sketch something for this, that's appreciated too.

Frode Hegland: Yeah. Dene, what do you think?

Dene Grigar: I think we can continue having a discussion. It's going to take me a while to write this out, so I'll probably now restructure this and present maybe the first bit of structure. But next week is spring break, and I'm going to a funeral; another one, hopefully the last one for a few weeks. And when I come back, I'll hopefully have my mind clear of death. Right now I'm just carrying a lot of crap in my head. So let's go to Andrew and Adam.

Andrew Thompson: Sorry.

Frode Hegland: While you talk, I'm just going to put the link in the chat so people can look at what you have built. Okay.

Andrew Thompson: Sorry. I've got the Basecamp link here as well, which I'll throw in; if people don't have a headset, they can just look at the video. So this just builds off of last week's work. I know one of the big complaints was how difficult it was to select things because of how jittery the hands were, so I implemented motion smoothing. It essentially remembers where the hand has been for a while and then averages all of those positions. So it tends to drag behind a little bit, but it's very smooth, and that drag starts to feel pretty natural pretty fast, as long as you're expecting it; it is a little bit weird if you're not expecting it to be smoothed. Right now it's rather aggressive with the smoothing: it's taking 30 different positions, so there's a significant trail, and that might be fine. I was testing 20 and it still had a bit of jitter, so I went up to 30. I can test it with any number we want: if we want to set it further, to like 40, we can; if we want to bring it down to like 10 or 15, we can. Of course, a lower number means more jitter but a faster response. That being said, not everything is now on a delay, just the motion part; the selecting and so on is still immediate. It would be infuriating if it wasn't.
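For reference, a minimal sketch of the rolling-average smoothing Andrew describes, as self-contained TypeScript. The Vec3 shape, the class name and the API are illustrative, not the prototype's actual code; only the averaging idea and the 30-sample window come from the discussion.

```typescript
// Rolling-average smoothing: remember the last N raw hand positions
// and report their mean. Larger windows are smoother but laggier.
type Vec3 = { x: number; y: number; z: number };

class PositionSmoother {
  private history: Vec3[] = [];

  constructor(private windowSize = 30) {} // 30 matches the value discussed

  // Feed in the latest raw joint position; get back the averaged one.
  update(raw: Vec3): Vec3 {
    this.history.push(raw);
    if (this.history.length > this.windowSize) this.history.shift();
    const n = this.history.length;
    const sum = this.history.reduce(
      (acc, p) => ({ x: acc.x + p.x, y: acc.y + p.y, z: acc.z + p.z }),
      { x: 0, y: 0, z: 0 }
    );
    return { x: sum.x / n, y: sum.y / n, z: sum.z / n };
  }
}

// Usage, once per frame:
const smoother = new PositionSmoother(30);
// const smoothed = smoother.update(rawIndexTipPosition);
// Note: discrete events (taps, selections) bypass the smoother, as
// Andrew says, so only continuous motion is delayed.
```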

Andrew Thompson: So that's the motion smoothing part. The selector tool has undergone some changes. It no longer looks like a pencil or a highlighter, and it's now gesture based rather than a tap that's always active. Right now it's closing these three fingers; that turns the selector tool on, of sorts. Visually, all that is is a little blue object that appears right here between your thumb and index finger, just an indicator that it's on. And then if you start to close your fingers, it'll send out a little line so you can point at the main object, the text. And then if you tap, it will play a little selection animation at the end. The text selection itself I've now broken up into a bunch of different lines. It's no longer just a single block of text: visually it looks the same, but it's now broken into each line independently, which means we can highlight the different lines with, like, bolding or font-size changes, which is what I currently do when you're pointing at it. Troika text isn't that customizable unless you break it into separate lines, so that's why I did it that way. The highlights from last time weren't even part of Troika; they were basically just rectangle objects I was generating around it.

Andrew Thompson: So it's kind of a pain just changing bits of text inside of a paragraph; it has to be a bunch of lines now, and I have noticed a slight performance hit from that. Not when you're just working, but when you try swiping, it seems to drop a little bit. Which is a little concerning, because the test I was on only has like 40 citations. So if we're going to start getting lag with 40, we're going to be a lot more limited on scope than we expected. But it doesn't seem to lag unless you're swiping it, so maybe that's a different thing we can work on. Maybe we won't have it be freely movable with the swiping; maybe we'll have it be more of a send-it-to-this-window kind of deal, where it just snaps there, more of a teleport of sorts. We can discuss that as it starts to become an issue. It's not currently an issue; we're just seeing problems ahead of time. And then the bigger thing, at least under the hood, is the citation search. If you select one of those text lines in the citations, it'll bring up the first paragraph it finds that has that citation inside of it. So you kind of see the context for where the citation shows up.

Andrew Thompson: And it's really fast; it just sort of pops up. But that's not cheated, like a lot of the tests have been so far; that's an actual search. It runs through the document and finds the result. Which brings me to the last bit, which is that it now supports, in air quotes and in the roughest sense, any document. It doesn't support any document; it specifically supports the ACM Hypertext documents in the HTML form that Mark has given us, which have been really useful to work with. So thank you so much, Mark. I've included a link to the directory. I just put up the 2022 documents onto our server, just for us to test with. I know we're not supposed to disseminate those, so we've got to keep them local, but it's for testing. And then you can check the Basecamp link; Mark has all of the numbers set up. You just put the number in at the end of that URL and you can swap out the documents. Basically, put that number in before you load into the VR experience and tap the silver sphere, and it should load in that document, citations and references, everything linked that way. Which is cool, in my opinion. But it has a few downsides, because I just got that search thing working last night; I was working on it till almost 10:00 at night.

Andrew Thompson: So it works for singular citations. I'm sure there are edge cases that I have not tested. Specifically, if you have, say, a range of citations, like "this is citations 33 through 35", it will have no idea what you're doing. It's looking specifically for single numbers, so it might get 33, I'm not sure, but it will not get 35. It's going to be a little bit iffy about that. I need to put in edge cases if that's something we expect to see frequently. But for now, tinker with it and let me know what issues you find. The big thing that I want feedback on is specifically the motion smoothing. I'm sure people will talk about the other stuff and have a bunch of feedback there, but the motion smoothing can be very, very easily adjusted, and I expect that we will need to adjust it. So if people are commonly voting that they want more smoothing, or commonly voting that they want less, I can implement that, and then maybe each week I'll shift the number by a little bit until we end up somewhere where most people are happy. Hey, we could even turn it into another setting on the menu in the end, one that just changes how much smoothing you have. That actually might be a good idea, but we'd have to discuss the range for that as well. Okay. That's it. I'll stop talking now.
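For reference, a rough sketch of the citation lookup Andrew describes: scan the document's paragraphs and return the first one containing a given citation number. The bracketed "[12]"-style marker is an assumption about how the ACM HTML renders citations, and the range limitation Andrew mentions falls out naturally: a collapsed marker like "[33-35]" only matches the numbers literally present in it.

```typescript
// Find the first paragraph whose text cites the given reference number.
// Assumes citations appear as bracketed markers such as "[12]" or "[3, 7]".
// A collapsed range like "[33-35]" will only match 33 and 35, not 34,
// which is exactly the edge case described above.
function findFirstParagraphCiting(
  paragraphs: string[],
  citation: number
): string | undefined {
  const marker = new RegExp(`\\[[^\\]]*\\b${citation}\\b[^\\]]*\\]`);
  return paragraphs.find((p) => marker.test(p));
}
```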

Frode Hegland: Yeah. Very good, very good. Guys, what do the rest of you think?

Mark Anderson: Great. I mean, the reason I was late is actually that I was just trying it out on the Oculus. A quick thing that occurred to me, just to follow up: I hope you saw I posted into the Slack a couple of graphs I made. Basically, I did a binning of the number of references per paper. The one I did last year was way off the chart, an exception. Realistically, I think somewhere between 30 and 60 is reasonable, and in reality probably about 30 to 45. So when you say above 40, that's not as bad as it might sound, because the number of times you're going to hit a list that's longer than that is probably lowish. It doesn't mean it's not something you have to look at, but it's not quite as much of a roadblock as it might otherwise seem. And you also, interestingly, made the point about where you get blocked-up citations. What I take from that is interesting feedback the other way: one of the things going forward we want to do, because we're not producing this just to be read off the page, for typographic beauty and for saving ink by not printing so many numbers, is that the hypertext version of this probably wants to expand all those things. In other words, each of those citations, even if it's a range of ten of them, needs somewhere, whether in the visual HTML render or in the metadata with it, to be available, so that you can do the sort of things you need to do. So if you've got references 13 through 15 and you need to get hold of 14, it's available to you somewhere in there. So I think that's a really interesting observation.

Speaker7: Thanks.

Andrew Thompson: Yeah, it certainly would give us more things to work with if we have them broken down into every citation independently.

Mark Anderson: Well, the other thing is, of course, we can experiment with that HTML, as you rightly noted. I gave the caveat that we need to be a bit careful where we put it, but if it's on our own server and essentially we're not serving it to the world, I don't think anyone is going to get worried. And by all means, if you want to make a copy of one of those and pull some of those things out, we can start to experiment, because I think those little threads will be tremendously useful going forward. It's the kind of thing that no one would ever ask you about, and you wouldn't trip over unless you literally tripped over it. And going forward, for the sort of presentation or recording, the data encoding that we're talking about, where there's probably more deliberate metadata, there's no reason not to record these things as separately addressable items, because that joining is purely for the visual typographic layer.
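A hedged sketch of the expansion Mark suggests, turning a typographic run like "[13-15]" back into individually addressable citation numbers. The "[a-b]" and "[a, b]" surface forms are assumptions about how the HTML collapses reference runs.

```typescript
// Expand a citation marker into the individual reference numbers it covers,
// so "[13-15]" yields [13, 14, 15] and each one is separately addressable.
function expandCitationMarker(marker: string): number[] {
  const inner = marker.replace(/^\[|\]$/g, "");
  const result: number[] = [];
  for (const part of inner.split(",")) {
    const item = part.trim();
    const range = item.match(/^(\d+)\s*[-\u2013]\s*(\d+)$/); // hyphen or en dash
    if (range) {
      for (let n = Number(range[1]); n <= Number(range[2]); n++) result.push(n);
    } else if (/^\d+$/.test(item)) {
      result.push(Number(item));
    }
  }
  return result;
}

// expandCitationMarker("[13-15]") -> [13, 14, 15]
// expandCitationMarker("[3, 7]")  -> [3, 7]
```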

Frode Hegland: So I still have interaction issues on the Vision; on the Quest it was okay. Dene, how was the interaction for you? Were you able to move the column of references around?

Dene Grigar: No. When Andrew comes tonight, he’s going to help me.

Speaker7: Okay.

Frode Hegland: Well

Dene Grigar: I could see it, but I couldn't move it.

Frode Hegland: Okay, good. So you'll be able to test on the headset as well; that's good. Yeah, no, that's really good. The other thing is the laser pointer and the blue dot. I would prefer two changes to that, and I wonder what everyone else thinks. One of them is: I don't want to see the blue dot or the laser. That's the first thing; I don't think it's necessary to have a tool for selection. And also, right now when I do this, maybe because of the smoothing of previous movements, it seems that the actual selection lands quite far below where I point. So that's something that could be tweaked, though it doesn't have to be dead on. Because remember, you built this really cool thing of pinch-to-select to show where the text and the citation are in the document. My feeling is, sorry to be crude, but it's like holding a handgun: you don't always sight down the barrel. You know where you're going to shoot because you know where your hand is, and it's not where the finger is. Similarly here, when you go into pointing mode, the finger doesn't have to land exactly on the bar, because the whole point is that you'll adjust; as you go up and down, you'll get to it. So what do you all think about removing the laser and the blue dot, and having pointing be a gesture rather than a tool?

Dene Grigar: Can I test this tonight when he’s here and then give you my feedback?

Frode Hegland: Yeah, absolutely. Good idea.

Andrew Thompson: So you're not even able to load into the test? I wonder what's wrong.

Dene Grigar: I know you’ll just show me tonight. I’m sure I’m doing something foolish.

Frode Hegland: Oh, you're not able to load in. Okay. Have you already done the setting in Safari to be able to do WebXR?

Speaker7: Yeah.

Frode Hegland: So what happens when you load this page?

Dene Grigar: Nothing loads. Don't worry about me. I'm going to have Andrew help me tonight. I'm sure I'm not doing something right.

Speaker7: Okay, okay.

Andrew Thompson: We’ll figure it out in. In person then.

Speaker7: Yeah.

Andrew Thompson: Yeah. So one clarification on your laserless suggestion, because I think you're definitely onto something: no point in having extra clutter on the screen. If we remove the little object and the laser, there is no real indicator that you're even selecting anything. I assume we're keeping the little dot that shows up on the text itself to show you where you're pointing, right? I think that's necessary. There's no way you can do this intuitively if you can't see where it is at all. That's like having a mouse you never see, where you're just like, oh well, I know where it is on the screen. That's not going to happen. But even with just that dot, I think we could get away with it. It may start to feel strange not knowing whether it's on or not; it would have to be something explicitly taught to the user. But that's okay; that's just something we'll have to keep in mind. Is that kind of in line with what you're thinking?

Frode Hegland: I don't notice that dot on the text. I didn't even know there was one, because what I see is the whole line of text getting bold and big, or dark and big. So that's what I see. Don't remove it on my account, because I haven't even seen it; that's not what I notice.

Andrew Thompson: Right. It's really small; it's just at the tip of the line. So I would suggest for this next test: remove the laser, remove the little object, keep the dot. And essentially what we're doing is recreating the Quest controller at that point, which, you know, works well; that's pretty intuitive for people. So no point in being overly complicated if we don't need to be.
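A minimal sketch of the change being agreed here, assuming a three.js scene where `ray` is the laser line and `reticle` is the small dot placed at the raycast hit point; both object names are placeholders, not the prototype's identifiers.

```typescript
import * as THREE from "three";

// Hide the laser entirely, but keep the reticle dot on whatever the
// pinch-pointing raycast hits, mirroring how a Quest controller feels.
function updatePointer(
  raycaster: THREE.Raycaster,
  targets: THREE.Object3D[],
  ray: THREE.Object3D,
  reticle: THREE.Object3D
): void {
  ray.visible = false; // per the discussion: no laser, no hand ornament
  const hit = raycaster.intersectObjects(targets, true)[0];
  if (hit) {
    reticle.visible = true;
    reticle.position.copy(hit.point); // only the dot on the text remains
  } else {
    reticle.visible = false;
  }
}
```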

Frode Hegland: Good point, Mark.

Mark Anderson: Yeah, I second what Andrew is saying, in the sense that these sound like controls in the making. One thing I can sense, for instance, is that for the onboarding you might want the laser pointer there just long enough to understand what it means, and then you'll hold it in your mind's eye and you won't need it. It's a bit like the tracking sensitivity: different people have a different amount of shake in their hand or wrist, so for the smoothing there will be a setting that seems a reasonable default, but I can absolutely imagine it's something we would envisage going forward as user-adjustable; otherwise that would be a feature request fairly early on, once it got to a wider user base. One of the things I'm still noticing when I start, and I'm explicitly doing this sitting down, is that when the screen initializes, the browser, when I press the button, is in front of me, but sometimes the visualization appears in different places. The height adjustment is working, and the pan in both directions is working.

Mark Anderson: What's harder for me at the moment is to put it at the distance I want. And that may be partly because I'm working in the small space of a seated bound. Quite often the other thing is I find that just here there's a blue or a purple axis coming up from the thing. I know what it is, and I know that we're building stuff, so that doesn't worry me unduly. But I have a slight sense that if it were possible for me to have some control to move the display forwards and backwards a bit, it would help. Just one other quick thing: I really like the way you're doing the selection on the side, but what I'm finding is that if I want to read that, I'd like to bring it right around in front of me. At the moment the list pans around, so the left-hand side of the list is in front of me and the text is to the side, which means it sits at a slight angle; there's a slight parallax to it, if you see what I mean. Does that make sense?

Andrew Thompson: I'm not quite getting the parallax you're talking about, unfortunately.

Mark Anderson: Well, I just sort of noticed it. It's more that I'd like to move the display, for want of a better term, further round. In other words, the cited paragraph could be in front of me. At the moment I can only move it so far, so what I get is the left-hand edge of the citation list in front of me, and the selected text just off to the side. Which is fine, but what I find I'm wanting to do is put it front and center, because it's easier for me to read it there.

Andrew Thompson: Well, are you saying you can't swipe it? The swiping isn't working?

Mark Anderson: The swiping. Well, it just appears to stop. I think it's because it's drawing off the left-hand edge. It feels like I'm moving the display of the main text to its left-hand edge, and then it doesn't go any further, but there's still text. I must tinker a bit more, but what I was trying to do was put the reference text, the search result, actually front and center in front of me. I'll have another go.

Andrew Thompson: Okay, I might be misunderstanding you, because it does rotate a full 360, so it should never get stuck on something; you should always be able to swipe it further. And you said move it further out: have you looked in the menu? There is that slider that pushes it further away or pulls it closer.

Speaker7: Right.

Mark Anderson: I’ll, I’ll have a go with that then. Yeah.

Andrew Thompson: Yeah, see if that's useful. The text does get small pretty fast when you push it away, but yeah, it's there.

Mark Anderson: No, thanks, this is super. Thanks for all the hard work.

Frode Hegland: Yeah, absolutely. So, Andrew, I look forward to you testing on the Vision to look at those small issues. When it comes to moving the text further back, you can do it in our little control thing; however, it would be best, I think we would agree, if you could do it with a gesture. So I'm wondering, Andrew, if you've thought about this: this is up and down, this is sideways. Would it be safe to do this back and forth?

Andrew Thompson: No, I think that would run into the issue we had before, where stuff triggers accidentally all the time. I really don't think you're going to be changing the comfortable view distance much; once you find what's comfortable for you, you're going to set it there. We do plan on having different depths, right? But they're going to be snap distances, and we haven't figured out what the gesture is there yet. I really feel like that's going to be too much. We can try it if you'd like.

Frode Hegland: No, no, no, that's fine. The only thing is that the control, instead of sliding this way, ideally should slide back and forth. But that's nothing to do now; it's such a detail, we'll look at that later. Now, on the topic of interactions, since I hear a lot of talk about them: it's a little past the hour, and we don't have Randall yet, but would it make sense for me to talk a little bit about potential interactions for us to look at?

Speaker7: Okay.

Frode Hegland: I will then do a quick slide thing that I made for you guys. This could have been done more clearly, but anyway. The screen is showing, right? Yeah. So these are the things that have been going around for a while, and the text is annoying; there's not very much, but you can see it: a menu on selecting any identified entity, to choose to see further information. When the result is shown, it is its own entity, named by where it came from, and the user can choose to see connections or not. Those are the design principles for this test. So what that means is, for instance, I tap on Dene Grigar, and in this notion you get a huge pop-up menu. This is very, very analogous to a control-click menu on a computer. So if we agree on something like this, we need to spend a serious amount of time deciding what should be in the menu and what is available metadata-wise. In this case we now click on Focus. And this is something that I discussed last week with Adam: having a right-to-left flow of information. If something's in focus, whatever it came from is moved to the left instead of being moved away, where it's still clutter in a way, but that's neither here nor there. So here we have this. Oh, by the way, Dene, here's something I haven't mentioned to you. Obviously I'm doing my thesis corrections again, and one of the big issues I have learned about, which may actually pull it all together, is the notion of electronic literature. So you're being more severely cited. I can tell you very briefly why: the difference between a scanned document or a PDF and a hypertext document with interactable text is the difference between digital and electronic literature, according to you guys. That is fantastic; that is something whose language I can use to further what this is about. So sorry, I should have told you.

Speaker7: Let me mention.

Dene Grigar: I've published on this before, but the three characteristics of born-digital literature are what I call PIE: participatory, interactive, and experiential. And that shows up in a lot of things. So, but yeah, if it's one or two or three of those things, then it's not a PDF. Yeah.

Frode Hegland: Probably it's not a dead PDF, right. So the whole thing here is: can we do really rich interactions that are useful in the environment that Andrew started building? So I'm now tapping on Publications. The menu goes away and we get a list of all the publications we have by Dene Grigar. I only have two listed here; Dene, obviously you have more, but for simplicity of display. So that's listed like that. And I can click on Dene again, and I choose now to close, because I don't want to see the name. And this is a really key design thing: the heading here is Publications by Dene Grigar. That is really, really important, because you may have 20 of these up by different people. So every single entity that you bring into view must be able to tell you what it is. So now I click on this, and I have a few relevant options. For instance, now I can choose to see what is cited by Dene across all of her work, and I can go into this next one and choose Cites Dene. This is, of course, what we've already done with Mark and Adam in 2D, but the idea is that these come out, and they come out left and right, but they are independent objects that can be moved anywhere you want. And because they have Dene's name in them, again, it means that they know where they came from, and you can use that to interact, to get back or get further information. Right. So now we tap on 2017, and I want, for this particular piece, to see connections. So now it shows me which ones are in it. This is probably going to be very messy, you know, with lots of lines, but it's an example, because we can also choose things like hiding those connections, and you can also see all connections. I click now on Dene's name again to get Dene back up there. So now I can go in and choose to also sort things and whatever, but I'll just close.

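A minimal TypeScript sketch of that provenance rule, purely as an illustration; the entity kinds and heading labels here are assumptions, not a fixed schema from the project:

```ts
// Every rectangle spawned into the space carries a heading that says what it
// is and whom it came from, e.g. "Publications by Dene Grigar".
type EntityKind = "publications" | "citedBy" | "cites";

interface SpawnedEntity {
  kind: EntityKind;
  sourceName: string; // the entity it was spawned from, e.g. "Dene Grigar"
  items: string[];    // whatever the list holds
}

const HEADINGS: Record<EntityKind, string> = {
  publications: "Publications by",
  citedBy: "Cited by",
  cites: "Cites",
};

// With 20 of these up by different people, each one can still tell you
// where it came from, and can route you back to its source.
function headingFor(entity: SpawnedEntity): string {
  return `${HEADINGS[entity.kind]} ${entity.sourceName}`;
}
```
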
Speaker7: And

Frode Hegland: This is just an intentionally messy display; we're showing what it could be like if lots of different things were at different distances. So I'm wondering, in the framework of what Andrew is building, if we keep the titles and we look at options, we could do something. And yes, voice commands can be very, very useful in this environment, I do agree.

Frode Hegland: Any comments? I think that's the last slide. Oh, yeah, just a brief other thing: some lines can be shown, but I'm not sure about that. And this is what maybe we'll be talking about later with Brandel.

Speaker7: So, Andrew.

Frode Hegland: I can see your hand now.

Speaker7: Yeah.

Andrew Thompson: I wonder, looking at this test that you put together, I wonder if it would be smart to revisit, and it would go back to what you were talking about earlier, where you can push things further away, but kind of like revisit the more free-form movement. But you wanted, like, grab the corners and not the center, and that lets you sort of move it freely. And we already went away from free-form. But looking at your messy layout, it almost feels like free-form is the solution for that. So I don't know if we want to kind of step backwards and try that again, or if we want to try to adapt. Because, I don't know, I guess I don't really have an answer or a question there; it's a talking point we should probably have, whether we want to keep going the direction we are, or change course before it's too late.

Dene Grigar: Can I respond to that before Mark starts? So, since we decided we're going to be looking at writing an article, right, preparing to write an article, that's the first activity we're going to look at, I think anything we build right now is going to be devoted to that activity. Is that correct? Am I understanding our mindset, Frode?

Speaker7: Yeah, exactly.

Dene Grigar: So, Andrew, you and I can sit down tonight when you come by; I don't know if you can come by long enough to sit down with me. We can talk about that and how we can structure what you've already done, because I don't think you need to recreate the wheel. We just need to, you know, build on it. Mark, I'm sorry.

Speaker7: No, I.

Frode Hegland: May I also comment on that? Me, me, me.

Frode Hegland: The notion of what I showed now is only asking you guys, and this is what I hope Mark will comment on: do you like the idea of having a kind of contextual menu that can spawn new things? And if so, we should have a discussion about what that should be, in order to serve the workflow that Dene and Mark are talking about. That was the point of that. So, Mark, please.

Speaker7: Okay.

Mark Anderson: Well, I think my take on that is slightly meta, in that what echoed for me when you showed me that is I was thinking: right, the menus at the moment are like the onboarding view of it. When you actually use it, those probably will become much smaller prompts. If we imagine these things when they're more mature, you would probably be configuring them, because when you're actually using this for real, once you understood how to use the system, you wouldn't want the dialogs; you'd probably know what they are. And whether it's, I don't know, a blue circle or a yellow triangle, or however it's manifested, you probably wouldn't eventually need to have that. But it seems a very useful way to explore that idea, using essentially the notion of a context menu, which we're probably familiar with from a normal sort of desktop operating system, as a way to explore it. And in the same way, the objects that we might create: I saw what you showed as an exemplar of the wider thing. So at the moment, understandably, you're taking things that are sort of obvious, because they're there and we know about them. But in reality, thinking about this exploration space, so going back to this point about writing an article, the objects, the things that you might be creating, will probably vary by task and by person. And we don't have to worry about what they are, if we understand that essentially those will in due course be configurable. And there's a loop in this, in that for them to be configurable, we will have to figure out what metadata they need. So there's another bit of building to do there. But to me it all hangs together quite nicely. Thanks.

Frode Hegland: Thank you. Yeah. So many things. First of all, I also saw what you wrote. I think that these context menus need to be really massive, because when you've spawned one, it means that that is the most important thing in your entire world for that moment. So by having it massive, it means you don't have to fiddle. And it also means, and I think this is what you meant, Mark, that it becomes a gesture. Right? So you know that if it's really quite large, if you always want to close the thing, you just do boom, you know; it becomes a gesture because you know where it is. You get the muscle memory. When I worked on the NBA, the American National Basketball Association official Chinese website, this was an issue that came up, because you could select any player, any game, any team, and this information thing came up, and the people I was working with wanted it to be subtle and semi-transparent. I said, what? Why? No, we tested it. You've spawned this yourself; you want to see it. It should be clear. I think it's exactly the same here.

Speaker7: I just to make.

Mark Anderson: A clarification, which doesn't in any way disagree with what you've just said: yes, I think it needs to be clear, but I'd also envisaged, going forward, that it would be exactly the sort of thing that would become a preference, because that choice of things could probably become a small set of little color differences or something for a practiced user, because they would be more used to the gesturing and stuff; they would know the meaning. So in other words, you wouldn't have to have the interruption. But I would absolutely expect it probably to be there in an onboarding mode, or for people who are very casual users, because otherwise you're going to get lost. And your point is well made about.

Speaker7: You need.

Mark Anderson: You need to be able to see that menu.

Frode Hegland: Can you elaborate: in what case, and what, would things disappear for a more experienced user?

Mark Anderson: So, given that we hold in principle that you could, with appropriate information underneath, sort of say: okay, when I want an author's context menu, such as the one you showed for Dene in your demo there, actually the things I want are x, y, z. I want these things, and they might perhaps be just attached to very small colored

Speaker7: Object.

Mark Anderson: Because the point is that it's about reduction of noise. So once you are familiar with the interface, and you're not just trying to learn how the context menus and things work, there's a mode, rather as we talked about turning the laser line on and off; it's a kind of transfer that I would see as possible and desirable, so that the more practiced user would actually have much more subtle cues. Because for them, they know where to look and they know what the cues mean, because in essence they set them up. So this would be the kind of thing where you would personalize it for yourself. And I don't think it's something to shoot for on day one, for the precise reason that that sort of customization is quite personal, and we shouldn't try to do it too early in the process. I was just thinking forward down the line that this is nice, and I don't see a menu like this as being something that we have to hold on to forever, because part of its purpose is to be seen and understood. Once you understand what it is and where and why it's there, you may not need that degree of boldness of display. But I don't see those two things as antipathetic to one another, simply because some would never leave what you're showing and others might move beyond it. And that's great.

Speaker7: Well, okay.

Frode Hegland: So this is really important in terms of design. So, like with Liquid: when you select text and you spawn the menu, you get a little toolbar with the text, right? On there you can mouse, you can keyboard, you can do all these things. But functionally, it works for me now; I'm so quick, I don't see it. I just go through it and it doesn't even flash up on the screen. The thinking here is basically to keep it massive, so that your hands know where to go. Right? So I think that if you make it more subtle, I don't think it's actually going to help anyone. What I can imagine, though, because your perspective I agree with, I'm talking about the implementation, is that once the system knows you're a bit experienced, what it might do is actually delay showing you the menu at all. So you just know where things are with your hand. Right? So there are many things we can experiment with there. Absolutely. Andrew.

Andrew Thompson: I think a straight-up delay might be frustrating, but if you have a fade-in, where you select it and it fades in over the course of, like, 0.3 seconds or something, which is pretty common for menus: if you're fast, it'll start to fade in and immediately fade back out, so you barely see it at all. And if you're slow, it's still fading in, so you visually see something's happening. It's not frustrating, it doesn't feel laggy. That might sort of work for both of your ideas.

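A minimal sketch of the fade-in Andrew describes, framework-free; in the actual WebXR build the target would presumably be something like a menu material, and the 300 ms default is taken from his figure:

```ts
// Anything with an opacity property can be faded, e.g. a menu's material.
type Fadeable = { opacity: number };

function fadeInMenu(target: Fadeable, durationMs = 300): () => void {
  let cancelled = false;
  const start = performance.now();

  const step = (now: number) => {
    if (cancelled) return;
    target.opacity = Math.min(1, (now - start) / durationMs);
    if (target.opacity < 1) requestAnimationFrame(step);
  };
  requestAnimationFrame(step);

  // The returned cancel hook lets a fast user dismiss the menu while it is
  // still faint, so it fades straight back out instead of flashing up fully.
  return () => { cancelled = true; };
}
```
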
Frode Hegland: Yeah, I think Mark and I strongly agree that none of this actually matters much, because once we put this on someone's head in Poland in September, that's the use case. So it's the first user. And Peter, those little dots are very something or other. I think, you know, the idea here is we have to balance introduction with power, obviously. Right. So now we have to tell someone, in person when we're there, otherwise through some other means, that once you're in: up and down to scroll, you know, sideways, that's easy. And then when you want to point at something, make your pointing finger and then tap to select. That's it. That's really, really good, right? We don't have too much. No, Mark, me neither. But it was a worthwhile kind of detour, so to speak. So I think that as we talk more about the specific things, we need to do exactly what you said: we need to find out, when Dene is working on reviewing a document, what is the metadata that she needs to have accessible in these contexts, and how should it be made accessible. And I think for me the big kind of breakthrough, you know, we have them all the time, was: yes, XR is good for visualization, but it's even more powerful for interaction. We need to make the interaction snappy as anything.

Dene Grigar: Yeah, that was a good question about the metadata. When I review essays, I review these reviews for Leonardo, I'll get like five or seven, sometimes ten, depending on the month. So every month I do this activity. They're all Word docs, primarily; they're coming to me as Word docs. There's no metadata. Everything is on the paper, and generally it's wrong. The author does not, number one, follow any kind of editorial guidelines. So if we say don't double-space, they fucking double-space, right? So part of my job is to clean up their messy papers. But then I look at the metadata that's in the paper itself, and it's the wrong page number; they haven't got the right ISBN; they sometimes don't even spell the author's name right. So what I have to do is reproduce that and send it back out as a, you know, PDF. Ideally, once I clean that up and save it as a PDF, that metadata is embedded in that article and can then be pulled out through the headset or some other thing. Right now, Leonardo is in a transition moment. They've just been moved over to the University of Arizona; they're getting a lot of money poured into it; they've moved away from MIT. So I'm hoping that this new stuff that we're doing will allow us to stay away from PDFs, or maybe produce two different docs: one that would be a PDF, and the other one a version that goes straight on the web. So that would be my recommendation for them. But I don't get any metadata from this. The metadata I do have on the page is usually wrong.

Frode Hegland: I think you also need to consult RTFM. Sorry, Mark, that's "read the fucking manual", for the authors. But yeah, that's an important point in most cases. But in this case we are, of course, dealing with perfect metadata. It's one of our selling points with the ACM. We're going to fake it and demonstrate: this is what's possible when you people out there in the world actually give us proper metadata. So it's very important.

Speaker7: Mark? Yeah, I was just.

Mark Anderson: Going to say, it was interesting, what Dene was talking about, the information turning up. One of the parts of the journey of making the big ACM data set, which was in a sense accidental (I just wanted something to do a visualization with, and, like Topsy, it growed), is that, looking back, I was constantly asking myself: well, if I want to be able to look at this particular thing, do I have sufficient information in what I've collected to actually explicitly say that? This goes back to what the computer can know as against the human mind, because if it's got to be inferred, it probably ain't going to happen; it's got to be inferred through a pattern that can be populated, to be fairer to the software. And yeah, that's really interesting, because the point Dene's making, which I'm totally cool with, is a problem I see for our current generation of tools. It's again unintentional, but we're so focused on print technology as being the bit that we understand. It's the WYSIWYG problem: it's right if it looks right. And as Brandel said, coming over from a Monday meeting or a previous Wednesday: one of the things, for instance, about something like HTML, not necessarily HTML itself but things like it, is they're just not that opinionated. They know it's going to be shown on some size of a phone, or it might be projected on the side of a building.

Mark Anderson: And there is some degree of trade-off between our choices of things like fonts and typefaces and the interaction that the reader has. The reader needs to have some agency in that, and for them to have agency, we need to have sufficient extra information to allow us to restructure a document as we want. And the really interesting thing that's come out of the month or so we've had recently of talking about academic reading is that we have gone from showing a document as, essentially, a rectangle, to thinking about the tractable bit we're playing with now, which is the references. Which is no surprise, because breaking them out of their current form is hard to do off the page; you've got to do that in your mind, basically. And yet we know that the references exist as essentially part of the citation tree, a thing that people talk about in academic circles but that generally has to be imagined; it's very rare that you can actually visualize it. So one of the things I really take out of this is that something for a generation of tools we haven't written yet is a metadata stream. So you're not just writing the narrative, the words on the page, as it were; you're also recording information about what you're writing: either directly holding special parts of it, because you're going to do things with it, various transforms, or information about what you're writing, because it's a necessary part of the consumption environment it's going to go into. Thanks.

Frode Hegland: Yes, obviously I strongly believe in that, and I think we need to just do that now. Just fake it. Right. So that's good. We need to have a demonstration of how it makes a difference. So, yeah, Brandel has been pulled away to real work, unfortunately, so he's not going to join us today to talk about this side of things, the metadata and so on. So we'll put that off until we can. But, Dene, are you there properly? I know you have a lot to run off to. I have a question; Dene is busy with things, so I'll ask you, Mark. Do you think you could theorize, think, and write down: you are doing a literature review, going through a few papers, and you expect full 100% metadata, everything you want. What kind of controls would you like within the system? What kind of buttons should we aim to build? Because, you know, the slide show you saw that I made, it's really simple, but it actually took a huge amount of revisions, because, oh no, we need that thing, and oh no, this other thing is covered. So if you guys work on that, that'd be really, really useful, because then Andrew will also have a better idea of where we're evolving to. Right.

Mark Anderson: Sure. I mean, I definitely think one thing to hold on to is this sort of generalized notion of a jigsaw puzzle, you know, because you turn all the pieces out on the table, you don't know quite how they fit together, but you do know that they will fit. You've hopefully got all or most of the pieces, and you may not have a picture of what it looks like at the end, but you have a sense it's there. So it's a sense of being able to juxtapose things; it's capturing soft relationships. Because this is one of those things that is obvious after the fact: when I looked at some of the early things that people were doing, things like Noda, you end up with something that looks really cool, and you realize it's sort of Visio in 3D. And actually that doesn't add as much as you thought it would, because you end up with lines between things, and it's fine that you can turn it around and stuff, but the novelty actually wears off, and I find myself asking what extra I got out of it. But there's the ability to work just in the way that people use 2D mapping spaces at the moment. I think the other subtle difference to remember is that it's not a graph, because a lot of the science part of this is about drawing graphs, which implicitly have relationships, and one of the interesting things you're doing at this early stage of synthesis is you're creating the relationships; they're imputed. You definitely aren't trying to make them manifest before they need to be, which is a challenging thing to do. But I think having this sort of plastic space, where you can put things where you need to, so you can begin to use it like an outboard brain, offers us some useful options. Great.

Frode Hegland: So we're now past that. That is a very important frame; I agree with everything you said. I think now we're at the point, and this is what I was asking both of you: I think it would be really, really good now if you guys start writing down, with the assumption that you have perfect metadata, perfect everything, you are doing a literature review or going through one document, whatever you prefer.

Speaker7: What are the.

Frode Hegland: Things you want to be able to actually do. You want to be able to see this, you want to be able to see that; what kind of relationships, what kind of connections. And then we can start experimenting with the ones that seem implementable. And then Andrew can build it, and we can see if it's a complete mess, in which case we try something different, or we can notch one up to success. I mean, I have to say. Yeah. No. Go on. Yeah.

Speaker7: Mark, please. Yeah.

Mark Anderson: I just think one axis that we can play with early, because it's sort of implicitly there, well, it's explicitly there and we can play with it, is just time. So if you have a whole lot of, in a sense, backstory, your literature review, you are innately looking back in time at what has gone before, and having some sense of the order of it. Because often when you come to it, as you read into something you don't know, it just arrives as a sort of layer of things, and some of the structure, the derivation that's inherent in it, is not immediate to the new reader. Just seeing it with some sense of a temporal order, I imagine, and I stress imagine, could be useful. In other words: this came before that, or these things happened roughly around the same time. Because it might be that what you're actually looking at, for instance, is synthesizing two or three concepts or ideas that are tracking forward together, reaching synthesis, perhaps, in the document you're writing. And it's quite interesting to know. It's a bit like the thought experiment of, you know, what if these people had been in the same room at the same time, or maybe at the same conference, as people do with historical figures. So I think that's something we can play around with even without necessarily having to use data; it's just thinking: well, what can we show by way of temporality, to finally get the word out.

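A small sketch of that temporal ordering, assuming each reference carries a year; the fields are illustrative:

```ts
// Order references by year and group the ones that "happened roughly around
// the same time" into bands, giving Mark's temporal axis for the display.
interface Reference { title: string; year: number }

function timelineBands(refs: Reference[]): Map<number, Reference[]> {
  const bands = new Map<number, Reference[]>();
  for (const ref of [...refs].sort((a, b) => a.year - b.year)) {
    if (!bands.has(ref.year)) bands.set(ref.year, []);
    bands.get(ref.year)!.push(ref); // same-year items share a band
  }
  return bands;
}
```
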
Dene Grigar: Years ago, when I was working in the MOO environment, Mark, I did that exact thing. What I did is I took various texts that people were reading at the time, and I put them together as if they were in a virtual environment, having a conversation about virtual reality. Right? I'll send you a link to that. But I looked at, you know, what would Plato say about this? What would Aristotle say about this? What does Jay David Bolter say about this? And they were in a MOO, having a virtual-environment discussion, just like we're having right now in the Zoom room. But I was using direct quotes from their writing, so I wasn't making up any words. I was just saying: Plato says this, and then Jay David Bolter answers with this. And so the dialog was an incredible experience, because it wasn't fiction; I don't know what to call that genre. It really was very interesting.

Mark Anderson: What's really interesting in what you're mentioning there is, well, I don't know what axis that sits on, but it clearly sits on one. I mean, there is a plane, the plane of the implied dialog between those views or those statements being made, and it fits somewhere in your thought space. So that's actually a really nice example of the kind of thing we could do. And that nicely does not fall into an obviously well-titled box.

Dene Grigar: Yeah, I’ll pull that. I’ll pull that up.

Speaker7: Yeah.

Frode Hegland: Okay, so here's the thing. A lot of what we need can be done in a list, if we have, and here's the really Mark-and-me thing, good metadata. Even on a 13-inch laptop screen you can actually get a lot. Obviously that's not what we're saying you should do. But, for instance, with the temporality, you should be able to say: this bunch of documents, I need to have all of them combined into a list. That's actually a really important thing. We have the notion, and I'm very happy to change the word later, that a single PDF's reference section could be treated as a mini library. So for you to be able to say: take these X number of documents, turn them into one library, and then order them in such and such a way; compare them to this other corpus. Yeah. That's important.

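A minimal sketch of combining several mini-libraries into one list, assuming some stable identifier per entry (a DOI here, purely as an assumption):

```ts
// Each document's reference section is treated as a mini library; merging
// them dedupes on the identifier and then orders the combined list.
interface LibraryEntry { doi: string; title: string; year: number }

function combineIntoLibrary(...miniLibraries: LibraryEntry[][]): LibraryEntry[] {
  const byDoi = new Map<string, LibraryEntry>();
  for (const lib of miniLibraries)
    for (const entry of lib) byDoi.set(entry.doi, entry); // duplicates collapse
  // "...and then order them in such and such a way", here simply by year.
  return [...byDoi.values()].sort((a, b) => a.year - b.year);
}
```
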
Mark Anderson: You know, if you have the information, there are all sorts of things you can also do in the sense of co-location, in the loose sense. So, people who work in the same country or in the same city or something; you can play with all these things, because it's metadata that you use to gloss the object, which is in this case the reference. And we don't really need to know what that is. We probably, in our minds, imagine a bit of a string of text, like a title floating in space, but it matters not. So the reference is the object, which we will then dress with additional information, and use that information to decide where we put it in the display space. And we're doing it as an aid to understanding and synthesis. And also, to finish out: it's something that at the moment, in our various ways, people either do in their mind's eye, or on their jotter, or on a whiteboard. So often we externalize these things, and sure, we do it in a 2D space, but I think the issue here is not so much 2D versus 3D; it's the plasticity of the space, and the ease with which, given the appropriate information, the environment can actually represent things to us. You know, we don't have to keep drawing it out ourselves; it can be drawn out. You have to have the metadata, and you have to be able to describe what you want, but then you have this lovely sort of plastic environment.

Frode Hegland: You're talking about plasticity; I call it interactivity. I agree with your language, but I think we all agree that what we're trying to allow the user to do here is build sculptures, and incredibly smoothly. So the interactivity, the plasticity: yeah, I'm very, very happy to really highlight the word plasticity. That is the point.

Speaker7: Yeah. Yes.

Mark Anderson: I mean, I use that word because it's about doing something that is otherwise rather hard. I can take a sheet of paper and I can draw on it, and yes, I can fold the paper, sort of thing, but once I've drawn it, I've got to basically redraw the diagram to change it. Having something where you can just, in essence, inform the environment, no, I actually don't want it this way, I want this strand of information from it, and it can draw that for you, is tremendously powerful.

Speaker7: Yeah.

Frode Hegland: Absolutely. So we don't have a mechanism now to combine libraries, so to speak, but I think we can get to that later.

Mark Anderson: That's a useful point. I mean, one of the other things rattling around my brain from previous discussions is the idea that it's useful to have this notion of the library resource, in the sense that that's also where any additional glosses or extra information about the reference entities will sit. Because it might not necessarily be, for instance, surfaced into the traditional print narrative, for want of a better term, but it's definitely there in the document. And it's there with the intent that people who are using the document in a rich environment, or in a tool that can understand that information, can be given this extra information. Because certainly from the conversation I had with Dave, we had a quite long conversation on this a while back: the thing to shy away from is getting ourselves into the trap where everything has to be codified. What's more useful is to have affordances such that if you have something interesting to add, to gloss onto a particular reference or something, you can do so. But there isn't a thing that says: you can't go to the next stage until you fill out all these boxes, and please choose from these 160 choices; that way madness lies. But certainly the ability to put in extra information to inform things like an XR space is, I think, valuable.

Speaker7: Makes sense.

Frode Hegland: Adam, did you see the little slide stuff earlier? I can’t see you. Are you talking?

Speaker7: Well, no.

Frode Hegland: Because of what Mark was talking about, I just want to show something. So. Yeah. Sorry, now I can see you, Adam. Did you see the slides earlier?

Speaker7: No.

Adam Wern: Not really. Did you post them to Basecamp?

Frode Hegland: There was a video. I'll go through it again now, because now I think.

Speaker7: No, no.

Adam Wern: Yeah. No need to do it again. I can rewatch the video. Well.

Frode Hegland: This is the point; I'm going to skip to something. So the idea is that you should be able to select pretty much any known entity in the environment, and to choose how to view it and how to see relationships. Right. So in this example, on the list I've chosen Dene Grigar, and I now do Focus. Focus does what we talked about: the main thing goes left, Dene is now here, and because she was just focused, that means we have her default menu. So the notion here is to have massive menus, so they're really, really easy to gesture-click. Right. So we'll start with Dene: Publications. Here they are. And then we'll close Dene, the name; we don't need that now. And this is a very important thing: Publications by Dene Grigar. All the headings have to say where they come from, what they are, so you can always go back. Right. So if we now click on this again, we can then choose relevant things, and this is what I hope we will be able to develop: more and more specifically useful things. Because it's a reference list, you can choose the citation directions, but also, look, it has sort by dates; there are many different sortings that should be doable immediately here. Right. So we'll just open these things, which is what you and Mark have already built in 2D. It's exactly the same, except they now happen to be three independent rectangles in XR space; you can move them anywhere you want. You select a specific item, you get more opportunities, like showing connections. So this is what this one particularly has cited, that stuff. And then we go a little bit further. Here we go back, tap on Dene here, we get the Dene name back, because that's got keys to all kinds of things.

Adam Wern: So, a question here in the middle of it: the context menu you're showing here, is that kind of a design, or is that just conceptual? Are you imagining a real context menu in front of everything?

Speaker7: Yes.

Frode Hegland: Not particularly this visual design. But by the way, the last thing I wanted to show is actually on this slide, and that is no.

Adam Wern: But as a menu list thing in front of the content, however it's designed, is that

Frode Hegland: Going to address that, Adam. Exactly. I just wanted to show everyone, because I changed one thing. So, because this thing on the left is a list of documents, there is the option to combine with other references; we have to look at the language. But this is what Mark was saying earlier: let's say this would be his option to choose which other ones go in, so he can then do a timeline and lots of things like that. So that's the thing. Yeah. The thinking here, Adam, is that it doesn't help anyone if this is hard to use, obviously, right? What I mean is that when we show this to people, we show them: you do this to go up and down, this to go sideways, and this to point. That's a really good introduction. Beyond that, to give a really plastic environment, to use Mark's words, which I agree with, really interactive environments: point to a thing, any selectable unit, and you get this massively huge menu. It's massively huge so that over time you will know where things are, so it becomes a gesture. You know, you do this and move down a bit, and you've selected it. But in the beginning a new user will just duck, duck, duck, duck, you know, like anybody starting with a mouse; it will take time to use it. But it just means that any time you select a heading, a command, a thing, you get these options. And if we go this route, we need to decide and test what options make sense and what data is available. Right. I'll just skip to the last slide as a little bit of background for that discussion.

Speaker7: Because you see.

Frode Hegland: Here, all of these little things are their own little rectangles that can be moved anywhere the user wants them. But because they have the same type of title on top, if you point to one of those titles and do a select, the menu thing comes up. And finally, this is really key, and this is what I'm so extremely happy to see that Andrew has implemented: you have an option to show connections. Right. So these are aware of each other. So here the user has selected this document and chosen to turn on connections, so you can see these lines coming on demand. A pie menu and all of that, Peter, I don't want to do, because working in XR is a new experience, and our fingers are very good at doing up and down; they're not very good at doing other things. Which is why, when you click on a mouse, you click down, you don't click forward. And look at how you choose things in a menu: it's called a pop-up menu, but it actually goes down, for that reason. When you select something, you rarely get a menu on top, because then your finger has to stretch more. So there are the physical constraints of our knuckles. All we're trying to do now is impress one person: our user in Poland puts the headset on, does this, oh, I can select something, and they say, wow, these are useful options. That's it. By the way, that was both a statement and a question.

Dene Grigar: Frode, Andrew and I have to leave at ten to get to my own lab meeting. So do we want to get to next steps?

Speaker7: Yeah.

Frode Hegland: Yes. Adam, please say what you were going to say, and then we’ll do next steps.

Speaker7: Yes.

Adam Wern: Well, I wouldn't dismiss the radial menus, or kind of pie menus, at all. Those are very common in XR; they are excellent just for their gestural reasons: you know that up is always kind of expand biography, left always gets her references, right is see who cites her. Much easier than kind of guessing that I want the third item on the list. So radial menus are, and I think Brandel has also kind of, well, not pushed for them, but I think we are quite a few people who like them more than context menus. I hate regular context menus, because they hide the content behind them. So.

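A small sketch of the fixed-direction layout Adam is arguing for; the labels and geometry are illustrative, not the project's actual commands:

```ts
// Each command keeps a fixed compass direction around the selection, so the
// choice becomes a memorized gesture rather than a read-and-pick list.
function radialPositions(labels: string[], radius: number) {
  return labels.map((label, i) => {
    const angle = Math.PI / 2 - (2 * Math.PI * i) / labels.length; // first item up
    return { label, x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
  });
}

// e.g. radialPositions(["biography", "references", "cited by", "cites"], 0.15)
// always puts "biography" straight up and "references" to the right.
```
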
Frode Hegland: I did actually experiment with radial menus in this: when you select Dene, you would have cited and cites, right and left. The reason I don't want to work on them now is because we don't know what's going to be on the menus. I think once we get a more mature set of commands, it is absolutely appropriate to do that. But right now we shouldn't worry about where to put them, because it may be, in many cases, just three or four commands, and it would make perfect sense. In any case, we don't want to run out of space, that's all. It's just for now, while we're going through Dene and Mark's use cases; I'm absolutely not saying we shouldn't add them. Right. Dene and Andrew have to go. Let's go through next steps, and then we can continue a few more minutes for the rest of us. So, next steps. Dene, what you have written, how will you make that available to us? Will that be a document in Basecamp, or will it be in Slack?

Dene Grigar: I'm going to be working on the structure of this. So what I'm going to do is lay this out so I can start to do, like, a literature review a little more in depth. And then I'll just be writing, and as I write things, I'll make them available in pieces. Right. And I want peer review, so as I write, I want comments.

Frode Hegland: So you will put that in Slack, right?

Dene Grigar: I probably will put it in Basecamp, with the link in Slack.

Speaker7: Okay. Yeah, that makes sense.

Frode Hegland: That's fine. Will you do the same, Mark?

Dene Grigar: We didn't get a chance to say this to Mark, but Mark, I do want you, if you could, to take the lead on library when you get a chance, and I'll add to your material and you add to mine. So if we each take the lead on one of these things, we could probably get it done.

Frode Hegland: And also decide what the notion of a library is. All right.

Dene Grigar: Well, that’s what we’re talking about. Like, what is a library?

Speaker7: Yeah.

Frode Hegland: Exactly. So, Andrew and Dene, sneak away when you have to, unfortunately. Look forward to seeing you, maybe, well, say on Monday; Fridays are a bit casual now.

Speaker7: Very quickly.

Mark Anderson: For Dene: I've got those two documents you put up in the chat, but I'm not sure what format, because I'm struggling to open them.

Dene Grigar: They're very old Word docs, so I open mine up in TextEdit, fine, that's okay, and edit. But the nice thing about opening them up in those old text-editing programs is that the typeface stays intact. It's meant to look like an old computer typeface, okay, like Courier New.

Speaker7: Yeah. Okay.

Dene Grigar: And it's about writing in virtual spaces. So I was thinking about this in the late 1990s, and writing and publishing it in 2001. So.

Frode Hegland: Right. So, Adam, did you feel that I was being horrible and dismissive about the pie menus, or do you... I mean, I don't mind running with it in parallel to do design at all. I'm just trying to highlight my concern that we need to know what's going on in there first. I don't know, what do you think?

Adam Wern: That's fine, just as long as we keep in mind that there can be other kinds of menus that don't obscure the context, or that are more compact in a way. A list is harder to hit in XR, I think, than kind of cardinal directions. So I think it's very fruitful to have those ideas, and I've seen many good interfaces with those. So I would rather start with a kind of pie menu idea, or a kind of radial menu idea, than a list, because it's easy to get stuck in the kind of context-menu-from-the-90s ideas. So I think it's good to start on the other side, but you decide.

Frode Hegland: This is why it's so great that you are doing special projects as well. If you want to experiment with that, that's absolutely fine. That's just good; there's nothing wrong with that.

Speaker7: Can I just chime in?

Mark Anderson: I think, in a sense, I perhaps mis-explained it earlier. My thought, when I was saying that things should be able to change, is the way I think of this: the issue that's raised about the sort of context menu or whatever is, don't think of it in terms of its visual manifestation; think about the affordances it's got to give. And if one day we want to show it as a context menu, and another day we want to show it as a pie chart or something, that doesn't matter. So the key, the more important thing, which I think is probably counter-intuitive for the way we're all used to doing things, which is very much going with the visuals, is just saying: no, actually, what are the actions, what are the choices that we want to have there? Because part of that is that the choices themselves may give us prompts as to nice ways to implement them, other than purely textually.

Frode Hegland: That's probably why we are going at it from both sides at the same time. I was highlighting that position, but we should absolutely look at the interactions too, because that's what makes it possible. It's been really fascinating to be working with the various headsets for a while and find out that a lot of what we thought just doesn't make any sense.

Speaker7: Yeah.

Frode Hegland: Adam, what are your priorities or interests? You were busy with family stuff, so please take your time now.

Adam Wern: Well, I think we're nearing a point where it could be good to share the source code a bit internally. I don't think publishing on GitHub is important, but I think many of the bugs we're encountering are something that me or Brandel could take a look at, kind of performance issues or jitter, because I think we have been playing with that a lot, so we know some tricks of the trade.

Frode Hegland: I thought you had access to the code. That’s what Andrew said earlier in chat. Andrew, are you still here?

Andrew Thompson: Yes. I’m still here.

Speaker7: Okay.

Andrew Thompson: Yeah, the code’s on GitHub. I can push this current version if you’d like it.

Adam Wern: Oh, nice. Nice, nice. I haven't seen that for some reason; I haven't gotten any notifications, even though I'm subscribed to the project. Oh, that's great. Then I will actually do my job properly instead. That's wonderful.

Frode Hegland: I’m glad that’s working out.

Speaker7: I just had a quick thought.

Mark Anderson: For Andrew: I really like the idea of the search thing you've put into the latest demo. I'm just wondering if it might be easier for people if, as we've got the info, I make up for you a list of, say, the titles, not necessarily the full title and subtitle, but just having a pop-up list, because it's only about, I don't know, 30 or 40 things. It's probably easier for people to pick from that than to try to remember a long number with a dot in the middle, which isn't exactly the way most of us remember these things. Anyway, if that makes any sense.

Speaker7: I don’t know.

Andrew Thompson: If it's necessary. Because right now it's just for testing, so testing can be a little bit cumbersome; I don't think putting in a number is really a problem. It's more the fact that it can support different ones that's the point. Obviously, when we start to make this more with the libraries, we'll want to change stuff, but I think that'll be a lot more internal.

Speaker7: Yeah.

Mark Anderson: The other thing, for anyone who does play with that feature: you probably only need to think about the last three or four digits of the long number, because the bit before the dot is just the uid of the conference, of the publication, and the bit after the dot is the uid of the item, the paper itself. Just in case that helps anyone.

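A tiny sketch of that id format as Mark describes it; the "<conference uid>.<paper uid>" shape is taken from his description of the test build, not from any official ACM specification:

```ts
// Split a test id so testers only have to care about the short paper uid
// after the dot; the part before it just names the conference/publication.
function splitTestId(id: string): { conference: string; paper: string } {
  const dot = id.indexOf(".");
  if (dot === -1) throw new Error(`expected "<conference>.<paper>", got "${id}"`);
  return { conference: id.slice(0, dot), paper: id.slice(dot + 1) };
}

// splitTestId("3313831.3376154") -> { conference: "3313831", paper: "3376154" }
```
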
Speaker7: You know what the.

Frode Hegland: The craziest thing about the project so far, I think. Anyway, of course you don't know; you don't know my brain. It's actually quite fun, isn't it?

Speaker7: You know, it is much.

Frode Hegland: Less stressful than I imagined, because, Andrew, you're making much more progress. Meaning, you know, we get to have these discussions and look at different things, and it's not like, oh my God, we have to deliver now. So the journey is just absolutely amazing. We are properly recording it, properly disagreeing with things. So yeah, thanks guys.

Speaker7: Yeah. It’s Yeah.

Frode Hegland: Hang on, I have to write a message out to the people doing my

Speaker7: Didn’t doing.

Frode Hegland: Author. One second.

Speaker7: So did.

Adam Wern: Andrew go?

Speaker7: Oh, yes.

Mark Anderson: Yeah, he had to go.

Frode Hegland: Yeah, he had to go to campus. And now we can talk freely about the Washington people, so on and so forth.

Speaker7: Yeah.

Frode Hegland: I really want to find more commands and make sure that I can fit them in and add them. If you want to work on different kinds of ways of presenting the menu, I think that's just great.

Speaker7: You know, I’ve.

Frode Hegland: Been working quite a lot with the Vision Pro at coffee shops and so on, and there's so little I can do. I can now work a little bit in Author, because it mostly functions as the native view, but the native window is clunky. You know, I had this dream of a really tall column; it just won't let me do that as a native thing. That's not something we could have known. So at some point soon, I think we're ready to go beyond something like that and actually do some work in this space.

Adam Wern: Yeah. And also, I think it would be worthwhile still to try out the few professional or real tools that people are using in Quest or Apple Vision: a few architecture tools and 3D tools. They have all those radial menus and everything. Andrea's company has released their new architecture software, with lots of interesting menus, menu systems for kind of quickly grabbing tools and manipulating workspaces. Of course, architecture is visual and spatial, but some of it applies to text as well. So I think instead of doing our own designs, it's good to also survey the current state of the art and carefully steal, or pick, the good ideas from there.

Frode Hegland: Yeah, absolutely. I don't know if you saw me typing, but I was typing to Andrea just there, to ask her to send me directions to where to go. I've been very much in the Apple headset; I need to check that out. You're absolutely right. If you have a link to it, Adam, that would be great. Also, it is in the Quest kind of experimental store or something, isn't it?

Adam Wern: Yeah, yeah, but you can find it if you search for Space Elevator, I think it's called, in the Quest store; it will pop up. It popped up for me, at least. And it's marked as an App Lab thing, but it's in the regular store, marked as a Lab version or something. You don't have to do that right now.

Speaker7: No, no, no.

Frode Hegland: It's just that I'm not in there very often. Okay. Yeah, it's a night and day difference, right? Yeah. No, that's a very good idea. So you're saying, Adam, you want us to start with looking at interactions?

Adam Wern: No. No, I think you're doing exactly the right thing: thinking first of what the user is doing, and then finding the interactions from that. I wouldn't do it another way. Of course, I've done it another way, I've explored interactions as single pieces, but it's better to start from the work. So a third alternative to the kind of context menus and radial menus is actually attaching different tools to the borders of the documents or the objects. You shake your head, but you use such objects all the time without noticing. Apple uses that as well. When you see your laptop and there is a kind of expand-workspace control, it's over the laptop. When it comes to moving windows, you get a thing underneath. So there are already tools there that you don't think of right now. So you shake your head, but you're wrong.

Speaker7: Is that.

Frode Hegland: Yeah.

Speaker7: Yeah, yeah.

Adam Wern: So, to finish: the idea is that if we have a name, we could attach the different directions it could go, where we could go from that name, at the sides of that object. We could even attach small previews, so if there is a PDF attached to a working title, we could see a small representation of that document, like a thumbnail image, and just click it and it expands over there. So we have other ways of doing the menu, so to say, where it's not in the way. I find context menus very clunky; they are not quick. I want keyboard shortcuts as often as I can, and this is the second best thing. So attaching things is better, according to me, if you can pull it off.

Speaker7: Right.

Frode Hegland: Keyboard shortcuts: you use Author a lot. I'm experimenting with that there, because Author with a floating thing is just rubbish; you need the keyboard. So, you know, that's there. When it comes to this, I really think we need to balance: visuals should be for information, hands should be for control. So to have things that are persistent on the screen, even on the Mac screen with the red, orange and green buttons, I find that not so ideal.

Speaker7: And when we said.

Adam Wern: Persistent? I didn't... I thought of them as kind of appearing: if you select an author, you get the options to go from there, but they shouldn't be on all the time. That would be horrible, to have a kind of Christmas tree of menus.

Frode Hegland: Adam, that is what I'm doing. The context menus are only there when you select something.

Adam Wern: Yeah, yeah. But what I'm saying is that the five or three things you have on the context menu could be small icons on the object itself when you've selected it, so you could expand it outwards. Instead of having the context menu in front of the object, you could have kind of directions to go from the object, or to transform it, and so on. Especially when it's more information, kind of: get the references. So you could have an icon-ish thing, or text on the side that says Citations, and you click, and it expands. Or it could be a mini thumbnail of it.

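A minimal sketch of those corner decorations; the corner names, metre units, and margin are illustrative assumptions:

```ts
// Small actions hang off the edges of the selected object instead of a menu
// covering it, in the spirit of window decorations (close, expand, move).
type Corner = "topLeft" | "topRight" | "bottomLeft" | "bottomRight";

function cornerOffset(corner: Corner, width: number, height: number, margin = 0.02) {
  const x = corner.endsWith("Right") ? width / 2 + margin : -(width / 2 + margin);
  const y = corner.startsWith("top") ? height / 2 + margin : -(height / 2 + margin);
  return { x, y }; // offset from the object's centre, e.g. for a "Citations" chip
}

// e.g. a chip at the top right of a 0.4 x 0.3 panel:
// cornerOffset("topRight", 0.4, 0.3) -> { x: 0.22, y: 0.17 }
```
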
Frode Hegland: You caught us in the middle of

Adam Wern: We're fighting. We've put context menus in one ring corner, radial menus in another, and we have a third option where we're kind of decorating buttons on the side, next to an object.

Speaker7: Just to show.

Frode Hegland: You a little, briefly, what it's based on. It's the notion that the user can get some sort of menu on any identifiable entity in the environment to get further information, and when the results are shown, that thing is its own entity, which is named for where it came from, and the user can choose to see connections or not; so, relatively obvious things. So, for example, the user here selects a document, and, this is the whole thing we're discussing, a huge menu comes up. I think that by default it's useful to have a menu like this, because we'll probably be changing it a lot as we experiment with what will be there. I completely agree that placement is important; so, for instance, close might be an X in the top left-hand corner, I don't know, currently it's just way at the bottom. Adam is suggesting something richer, which of course I agree with. But the way this works here: you select somebody's name and choose Focus; everything else is shunted to the side. And now you get, let's not even call it a context menu, let's just call it a menu, like a huge restaurant menu with options. And now it gets maybe more interesting: you've chosen to see her publications, so you just click on her name to get rid of it; you don't need it anymore. Right. And now that you have that, there are different options, obviously. And this is what I really look forward to discussing with the community: no matter what kind of menu system we use, we also obviously need to decide what's going to be on there. So we now just open up Cited By.

Frode Hegland: Yeah, those two. It's based on the fact that the top of each list says what it is. So this is related to Dene Grigar; that's why these two columns on the right and left can be generated. And when you then click on specific things, you have further options for seeing connections or not, which often would become really messy, so let's just hide them. And now we're going back to putting Dene up there, and I decide to close these things. But I just wanted to show, and this is a bit clunky, because Mark mentioned it during our meeting: combine with other references is important. One of the things: other references, meaning other lists of documents. Because maybe all you need is a timeline; maybe you want a different thing. So it's not just about listing what we have, but it's also about combining, so you can do further things. And that's it. And then I just want to show you one last slide where, on purpose, I made a really messy view. I thought I had it. Here we go. So here all the different rectangles are independent things that know what they are, can choose to connect to other things, and you can choose to leave some of them up, and whatever. So of course we can go and do something much better than that kind of simple menu. But that's what we were discussing.

Speaker7: Cool. Yeah. No, that makes sense.

Brandel Zachernuk: As a general position on radial versus linear menus and things like that: one of the main benefits of a radial menu is its reflexive capacity, for people to be able to know that they move down and to the left, and up and to the right, and things like that. And so the benefit of it is contingent on the stability of the menu items. If you don't have a clear sense of which ones are going to go where, then it's not possible for people to memorize them. So I like them, they are useful; they were really useful in Maya, they are pretty useful in Blender as well. But I also think that the main utility is that reflexive stability. It also depends on how reflexive people are expecting to be able to get within that thing. And it also means that there are presumptions with regard to accessibility and other things that come from having the ability to use two sort of independent dimensions to consume what is effectively a linear list. So that's where I think radial menus actually are better. But that stability issue, and the complexity of people understanding that that's what they need to do with it, is a little bit different. But, you know, beyond that, they both sound like really, really interesting things to pursue.

Brandel Zachernuk: I would love to have things be persistent, honestly, so that you have the ability to drag those things around. Programs like Illustrator, Photoshop, and a number of other things actually have menus that can be torn off and exist in a persistent way, so that their presence isn't contingent on retaining focus. They are by default, but you can invoke them into a persistent mode, which means you can have them ready to hand. And that's something that I think the additional space within XR would be really beneficial for.

Frode Hegland: So I agree on all points. Here’s a surprise. Maybe if we have perfect metadata, which we’re aiming to have because we’re faking it, this would also be faking it, showing what can be done if ACM produces the perfect metadata. So it’s not fake as in a demo; in that sense, we can also really start using voice here.

Speaker7: Right.

Frode Hegland: And I had, on one of the slides that I kind of skipped past, at the bottom of the screen, this little thing: we should have voice cheat sheets, you know, where we just write out the things you can say. So at a certain point you’ll be sitting in your office at home saying, you know, show me everything with this and that. I just put in a link to Claude; all of these things at some point will be plugged in. All of them will have AI; we will figure out a way to put AI into this. And that’s when it gets amazing, because we will have addressable entities. So when it says “Dene Grigar References”, that is really, really important: it becomes addressable. We can then say to the AI, take Dene Grigar’s references and Mark Anderson’s references and do this and that.
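
A rough sketch of what an addressable entity might look like as data, so that a voice command or an AI can name a generated list like “Dene Grigar References” and operate on it; every field name here is invented for illustration.

```typescript
// Sketch: every generated list or result is itself a named entity that
// records what it is and where it came from, so it can be addressed
// later ("take these references and those references and combine them").
interface Entity {
  id: string;
  label: string;        // e.g. "Dene Grigar References"
  derivedFrom?: string; // id(s) of the entity this was generated from
  kind: 'document' | 'person' | 'list' | 'timeline';
  items?: string[];     // member entity ids, for lists
}

// Combining two addressable lists yields a new, equally addressable one.
function combine(a: Entity, b: Entity, label: string): Entity {
  return {
    id: crypto.randomUUID(), // id scheme is illustrative
    label,
    derivedFrom: `${a.id}+${b.id}`,
    kind: 'list',
    items: [...(a.items ?? []), ...(b.items ?? [])],
  };
}
```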

Speaker10: He.

Speaker7: Interesting. Go ahead.

Adam Wern: Go go go.

Speaker7: Okay, you.

Mark Anderson: You answer first, because I’m looping out slightly.

Adam Wern: No, I just saw the cheat sheet in the hand: that you have your context menu, with the things you can click on and the voice commands, right there in your hand. It wouldn’t be such a bad thing; it would be very good to always know what you can say in a limited system, and to have a place where you can always see that. So it’s fun to play with the notion that you have it on the wrist, on the hand, or on the floor, or wherever, perhaps on the object as well, but somewhere you can look and perhaps even click, a persistent place. What I’m a bit against is context menus; I hate them. It’s good that they are close, but so often they cover the exact thing that I want to see, so they’re not perfect in that sense. That’s why I’m advocating using the corners of the object a bit more, where you could show previews and things, where you could branch off from that object. Of course, window decorations are exactly that: we have the X button in the corner, we have expand in the corner, we have move-around handles in the corners, so we don’t have to cover the object with the menu. So, yeah.
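
A minimal sketch of the corner idea, with window decorations as the model: controls live at the object’s corners so a menu never covers the content. The control names are placeholders, not from any prototype discussed here.

```typescript
// Sketch: assign controls to an object's four corners, like window
// decorations, instead of overlaying a context menu on the content.
type Corner = 'top-left' | 'top-right' | 'bottom-left' | 'bottom-right';

const CORNER_CONTROLS: Record<Corner, string> = {
  'top-left': 'close',
  'top-right': 'expand',
  'bottom-left': 'preview', // branch off: previews, connections, etc.
  'bottom-right': 'move',
};

// Given the object's width and height, anchor each control at its corner.
function cornerAnchor(corner: Corner, w: number, h: number) {
  const x = corner.endsWith('left') ? 0 : w;
  const y = corner.startsWith('top') ? 0 : h;
  return { x, y, control: CORNER_CONTROLS[corner] };
}
```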

Frode Hegland: Sorry, Mark, I’m going to have to go back on these points a little bit. Number one, agreed: the way it is currently, if you select a thing, it becomes the title of the context menu, so you don’t lose it, because I completely agree with you. And for a quick thing, if we can have it so you select a thing and you get those corner controls: yes, absolutely, agree with that; I’ve been working on that for decades. I’m afraid of overloading Andrew; I want him to make this functional before we start adding to it. So if Adam wants to start experimenting with this, I couldn’t imagine anything better. And secondly, and this is the really, really important point, my brain is having a problem right now, but it was so important, on the voice commands. In Doug Engelbart’s NLS, which was not like DOS; you typed in a logical command, and at any point you could type a question mark, and the whole screen would show you possible commands based on where you were in the hierarchy of what you were doing, which was phenomenally useful. What I’m hoping we can develop here is a voice partner. So the thing you said about having it in your hand, I like that very much, because your fingers can maybe be a sentence each. So when you’re speaking a command and you don’t know what to say, maybe you literally look at your hand, or maybe there is an avatar or whatever, right, saying...
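
The NLS behaviour described, typing a question mark and seeing the commands valid at that point, falls naturally out of a command tree, and a voice cheat sheet could read from the same structure. A hedged sketch with placeholder command names:

```typescript
// Sketch: a command hierarchy where "?" (or a glance at your hand)
// lists whatever is sayable at the current point in the hierarchy.
interface CommandNode {
  say: string; // the word you type or speak
  children?: CommandNode[];
}

const ROOT: CommandNode = {
  say: '',
  children: [
    { say: 'focus', children: [{ say: 'person' }, { say: 'document' }] },
    { say: 'show', children: [{ say: 'references' }, { say: 'citations' }] },
    { say: 'hide' },
  ],
};

// What can I say next, given what I have said so far?
function cheatSheet(path: string[]): string[] {
  let node: CommandNode = ROOT;
  for (const word of path) {
    const next = node.children?.find(c => c.say === word);
    if (!next) return [];
    node = next;
  }
  return (node.children ?? []).map(c => c.say);
}

// cheatSheet([])       -> ['focus', 'show', 'hide']
// cheatSheet(['show']) -> ['references', 'citations']
```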

Speaker7: You’re doing.

Adam Wern: Gesture. Yeah, I don’t know. And the computer will tell me.

Frode Hegland: So for the Italian version.

Speaker7: Yeah.

Mark Anderson: Show me any answers.

Frode Hegland: So a very, very powerfully interesting side project could be exactly that: how do we do voice, and also how do we integrate AI? Things like Claude and a few others don’t have proper APIs yet; you basically have to just type to them. We can make it part of a smoother workflow. So, super big. Mark, please pivot as much as you want. Sorry.

Speaker7: The thing is.

Mark Anderson: What’s sort of rattling through our brains as we look at this is that every time I think about the thinking-and-synthesis stage, and the sort of things we talked about earlier, where you’re, you know, starting on a new project or a new paper or something, very rarely does it follow neat lines, the sort of lines we like to put in our demonstration pictures, because the easily countable things are very rarely useful. So a challenge in this is how you do the emergent labelling. I don’t think it’s simple, but I think it’s something we definitely want to keep a handle on, because part of being able to do that is how we engage with the plastic nature, the reformable nature, of the environment we’re creating, and the ability to do this thing I mentioned earlier, where, because you’re not trying to draw a graph, the graph does not yet exist, which enrages people who like formal graphs. But you’re beginning to form relationships between things. And you may well do that, for instance, just spatially, by having groups of things: these are more like those, but, you know, they’re different, and we want to address those. The challenge is, if you don’t really know what that thing is, I think the reality is you normally know what it isn’t. You know it’s similar to this, but different from it.

Mark Anderson: And that’s fine, because it works in here; we handle that ambiguity quite well. The challenge is when we need to talk to our software, which can’t play quite that game and needs us to fess up and at least give it something tractable that it can use. So I think there’s something interesting to look at there, Brandel.

Speaker7: Yeah.

Brandel Zachernuk: I totally agree with that, Mark. And it’s pertinent to the thing I tried to jam into the conversation last Wednesday about what looking is for: that it’s for taking actions to progress clarity, rather than something that can be regarded as a holistic, overarching thing. There’s a guy, I think Will, at work who has a really interesting example of something he’s doing for his own visualization in VR. I think I’ll be able to make it a sort of public exemplar and use case within the year. The thing he’s looking at is just disgustingly complicated, but what he’s doing is building a sense and an understanding of it. And so I think that any tool needs to give you robust capacities to apply high-level controls over your view, to be able to tweak it in ways that...

Speaker7: Just to get you toward that.

Brandel Zachernuk: And having a large number of reflex-speed tools, or, like you say, building the graph, manipulating it, and being able to scale those things, is a very essential part of being able to make sense of these things. Because, as you say, the graph is complicated and managing the graph is the job: being able to curate a formation of it that has durability, but is flexible to the view changes you want to undertake, and that gives you the ability to undertake those changes as well, is pretty hard, but possible, and I think really, really important as a goal to pursue.

Frode Hegland: There are a lot of important issues coming up with this, and I’m glad we’re getting to the point where Andrew’s system is kind of usable, because what he showed us, selecting a reference and having the sentence where it appears in the document shown on the side, is phenomenal. You know, he went further than expected. So as long as we have a system that is addressable and interactive, we can start doing these kinds of things. It’ll be interesting, I think, to be able to save views, or environments, or whatever we want to call them. You know, when I sometimes get into “oh no, XR isn’t that special”, it usually comes down to something like: I need to analyze these documents based on this, this and that. It’s like, for crying out loud, you can just write a report; it can be automatically done; it doesn’t have to be interactive. A lot of stuff doesn’t have to be interactive. But the interactive bit is you setting up the framework for how you want things to appear for you. Imagine getting the full proceedings of the hypertext conference in two years, and when you view them, all the articles have been analyzed based on your criteria, how they connect, all these things, and it’s presented beautifully for you. And then you can interact with it. So I think we’re on the same page there. You shouldn’t have to do everything from scratch, basically.

Mark Anderson: All I can say is, I happen to be sitting in the chair of the poor, benighted person who’s got to assemble all the papers for this year’s hypertext conference. If you have any good ideas on that, let me know. Because, I mean, I have the whip hand, so I can put my elbow gently on the tiller without anyone noticing, if there’s something we can usefully do.

Speaker7: Yeah.

Frode Hegland: No, exactly. I mean, look, if it was up to me, we would be working on authoring and reading software for XR. You know, it’s a big one. We’re actually having fish for dinner, so I have to be a bit careful about how much I overrun. So for next week: Mark and Dene will look at academic workflows and what kind of commands might be necessary. Adam, if you want to work on better menuing, I couldn’t imagine anything better. And Brandel, you know, whatever you want, because that’s you. We’re grateful that you’re here.

Speaker7: And I just want to.

Frode Hegland: I just want to repeat the thing I said right before you arrived, Brandel: this is really quite fun. It’s much more fun than I expected it to be, and much less stressful. So it’s really good having these discussions. I also feel a bit more grounded because I’m doing Reader and Author, which are much more mundane, but they are also real. Author will be released tomorrow, by the way, at least for review, which is absolutely mad. It doesn’t do everything yet; it does some things. So for us to experiment here is great. Any other comments or things?

Speaker7: I just got.

Mark Anderson: A quick one, really, for Brandel. There’s an interesting observation from Andrew’s demo today, the one doing the selection of text, which I think links back to something you said a year or two back about the glue that’s offered by HTML, without getting too tied into formats. He said that was doable because using the ACM’s own HTML versions of the documents made it significantly easier. So there’s an interesting vindication there, in terms of how we make... I mean, we can’t do that with everything today, but I think there’s a really interesting vindication, not one I saw coming, but it’s nice to hear, that we definitely want to push harder on having these structured documents. It’ll cause us different problems; there’s the whole thing about what becomes the copy of record, if people want to argue about what the copy was, which I think is non-trivial. But it is interesting that we’re now beginning to see the light of: right, we have this extra information, we can do so much more.

Brandel Zachernuk: Yeah, yeah. Well, I won’t play the false modesty of saying it’s a surprise to hear that something I was saying years ago was right. No, I agree. I think that, you know, generative AI, for all its flaws, is a system for being able to impose best-guess structure over hitherto unstructured data as well. So I think that existing robustness within documents, combined with a reasonably well-behaved generative AI over that sort of corpus, is going to be a really, really constructive thing for pulling these things into the shapes that we want and wielding them for the purposes that we have. And I’m very, very excited about the combinations those things constitute together.

Adam Wern: And it will be so interesting to do the voice interfaces, voice interfaces with a generative AI kind of filling in the forms, so it has very well-defined boundaries to fill in. I think that’s a very fruitful thing to play with locally. And that you can do this in a web browser right now, at this moment, is absolutely mind-blowing; you can run so much in the web browser. Of course, it takes some battery on a headset, but everything will just get better, and you can offload it and do it on a separate computer if you want to. But it’s nice to be fully in web land, because you can do so many experiments quickly. I’ve been trying to back-fill voice commands to the Quest, but it’s a bit hard to get all that performance. I’ve been playing with the WASM, the WebAssembly, Whisper models, and then I went to the WebGPU models of Whisper. I can’t get those running on the Quest, but it works in a regular browser tab, and it’s quick, and good enough for back-filling it.
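
For readers who want to try this, a minimal sketch of in-browser Whisper transcription. It assumes the transformers.js library (@huggingface/transformers) and its WebGPU support; the model name and options are illustrative, not a configuration from this project.

```typescript
// Sketch of in-browser Whisper via transformers.js; model and device
// choices are assumptions, not a tested setup from the discussion.
import { pipeline } from '@huggingface/transformers';

async function makeTranscriber() {
  try {
    // WebGPU where available (e.g. a regular desktop browser tab)...
    return await pipeline('automatic-speech-recognition',
      'onnx-community/whisper-tiny.en', { device: 'webgpu' });
  } catch {
    // ...falling back to WASM on devices where WebGPU is unavailable.
    return await pipeline('automatic-speech-recognition',
      'onnx-community/whisper-tiny.en', { device: 'wasm' });
  }
}

// audio: mono Float32Array resampled to 16 kHz, as Whisper expects.
export async function transcribe(audio: Float32Array): Promise<string> {
  const transcriber = await makeTranscriber();
  const result: any = await transcriber(audio);
  return result.text;
}
```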

Speaker7: Well, if you’ve got it working, then...

Brandel Zachernuk: You can just wear your AirPods and then have a socket connection to your device as well. But yeah. Yeah.

Adam Wern: So much plumbing. And it’s still plumbing that isn’t polished: the Whisper models need to be tweaked, and the memory issues are real issues. There are lots of moving parts here, and lots of plumbing to do, just to match what the Apple Watch version can do natively. But it’s still very fun to play with, and making things respond to voice commands in the web browser is, I think, where we should go in terms of interfaces. There’s a whole new landscape there. Responding to one command is one thing, but conversational voice interfaces, really multimodal voice interfaces, are where the challenge, but also the opportunity, is: that it knows I’m staring at these three things, or have my attention in this direction, that I’ve just selected a thing and deselected it, a kind of light history of what I’ve done before. So whenever I’m a bit loose with my language, the LLM could still catch a few of those “this” and “that” words.
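
A hypothetical sketch of the loose-language resolution Adam describes: feed the utterance plus a light interaction history (gaze targets, recent selections) to a language model so it can resolve the “this” and “that” words. askLLM() is a stand-in for whatever model endpoint is actually used, not a real API.

```typescript
// Sketch: resolving vague voice commands ("put that there") against
// interaction context; all names here are invented for illustration.
interface InteractionContext {
  gazedAt: string[];          // entities the user is looking at
  recentlySelected: string[]; // light history of selections/deselections
}

async function resolveCommand(
  utterance: string,
  ctx: InteractionContext,
  askLLM: (prompt: string) => Promise<string>, // hypothetical stand-in
): Promise<string> {
  const prompt = [
    'You control an XR workspace. Resolve the command below to a',
    'concrete action on named entities, using the context.',
    `Gazed-at entities: ${ctx.gazedAt.join(', ') || 'none'}`,
    `Recent selections: ${ctx.recentlySelected.join(', ') || 'none'}`,
    `Command: "${utterance}"`,
  ].join('\n');
  return askLLM(prompt); // e.g. focus("Dene Grigar References")
}
```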

Speaker7: Yeah. Yeah, yeah.

Brandel Zachernuk: The two exciting and challenging things are the multimodal, so that there’s “put that there”, and then there’s also multi-turn, so that you have persistence across commands and the ability to refine them. I don’t know that the book is closed on the right, or best, or even a good way to do those things. I’ve done them arduously by hand in the past, and it’s powerful, but I wouldn’t wish it on anybody to maintain. So yeah, I would be thrilled to find out what you come up with.

Speaker7: So. Yeah.

Frode Hegland: Yeah, I’ve got to go. I do think there is something the universe is doing by having AI and XR mature at the same time, and I think we can definitely help people get control over AI by giving them a richer experience like this. So this is a huge topic. We’re not doing Fridays at the same level of organization as before, but if there is any interest in a theme like this, or metadata, or infrastructures, AI, XR, whatever, you know, just propose a time. I’m very happy to spend more time on this. It is all recorded, it is all useful, and it’s all quite wonderful.

Speaker7: So I’ll see you on Monday.

Frode Hegland: Unless, unless other...

Adam Wern: Yeah, you could just stop the recording if you’d like to; you don’t have to. I’ve got a few more minutes.

Mark Anderson: I’ve got to go too, actually.

Adam Wern: Okay, okay, okay.

Frode Hegland: See you later, Mark. Yeah, I mean, if you and Adam want to talk for a bit, that’s absolutely fine; I can leave it running. Or I can stop the...

Speaker7: Hang on.

Frode Hegland: Just texting Emily.

Adam Wern: Or the vicious cold.

Frode Hegland: I got a notification here on my watch that the Joule, the sous vide, is finished with the fish. So that was kind of cool. I’m upstairs, of course.

Brandel Zachernuk: That’s awesome. Yeah, I’ve got ten more minutes until my next one. But yeah. Cool.

Frode Hegland: All right. Okay, I’ll let you guys talk. I’ll leave the recording on if you don’t mind, because you’re amazing. Let’s capture it.

Adam Wern: I prefer to just talk without recording all the time.
