29 May 2024
Video: https://youtu.be/tSF5gR76ax4
Frode Hegland: Oh. Hello, everyone. Better. Yeah, I can see you. Look, it’s cutting up for some reason. Okay. I have no idea where Adam is going. The mystery of the age. Someone’s posted in the chat. So, Mark, we’ll see you on Friday. So, Fabien, we’ll see you on Friday. And we’ll say your passport is valid.
Fabien Benetou: I had to check this morning because sometimes, you know, they want the passport to be like six months ahead or something, right? But here I think it’s just the duration of the trip, so no stress.
Frode Hegland: That’s. Oh, yeah, well that’s good. Otherwise, we’d have to take you in as a political asylum seeker.
Fabien Benetou: Yeah, well, let’s not try even.
Frode Hegland: Let’s not even talk about that, right? I had a brief meeting with Dene yesterday. Oh, here she is. Fantastic. Hey, Dene.
Dene Grigar: Good morning.
Frode Hegland: Look at my cold background. It’s like interactive 3D. Look, if I do this, it’ll manage to be annoying in a completely different way.
Dene Grigar: I hear your birthday is today, maybe. Adam, your birthday? Yes. When’s yours?
Frode Hegland: Birthday? Any such thing about what’s happening on Friday? No. Then the only thing about Friday.
Dene Grigar: And Fiona’s birthdays this weekend too. So happy birthday, everybody.
Frode Hegland: Dene. That is top secret government level. Oh, for crying out loud.
Dene Grigar: You asked me how my lab functions, and I told you that we do lots of real personal things to celebrate each other. And this is one of them. I’m bringing it to our lab here: the humanizing part of working with human beings, to be honest.
Frode Hegland: To be honest, I’m just glad you’re wearing so much color today. Literally brightening the room. Yeah. Just as you came in, I mentioned that we went through a kind of a scenario thing together yesterday. I forgot to do an agenda for today because this guy here arrived at eight this morning. But what we do have to cover today, unless there are any other announcements: Fabien would like to do some demonstrations, we’d like to go through that scenario that Dene now has, and then it’s Andrew time, of course. So yeah, it’s exciting. What should we start with, Dene? I’ll make an agenda right now. Lisa, we have something. I apologize again.
Dene Grigar: I like the birthday cakes falling all over my chat. Thank you, Fabien.
Speaker5: Toot toot toot. That’s all. Yeah. That’s not.
Frode Hegland: Necessary today.
Speaker1: Meanwhile, continue. Tell them about your meeting.
Frode Hegland: Yeah, I’m going to post this right now, actually.
Dene Grigar: So my lab has, what, how many people do we have right now, Andrew? About seven, eight. Yeah. So we have 7 or 8 people that meet, and we meet in person in the lab on Wednesdays, which is nice, most of us. But a lot of our folks live out of town, so they come in from out of the area. But we meet, we talk about, like, how was your weekend, how are your kids. You know, we know the names of each other’s kids and cats and dogs. Andrew and James and I are going to an event where they’re helping me with an exhibition in Victoria, BC in a week and a half, so we travel together. Anyway, I think we’re kind of close, in a professional way.
Frode Hegland: Just as you said that Adam, for some reason, disappeared. So there you go.
Speaker5: Hopefully.
Dene Grigar: At the heart of it, it’s so interesting. We’re all kind of geeky, nerdy. I’m probably the most outgoing one. Maybe Holly and I are the most outgoing, you think, Andrew? Yeah. But it’s a much more, it’s a different environment than this one. I think we’re, I don’t know.
Frode Hegland: Well, it doesn’t.
Speaker5: Help big.
Dene Grigar: Time. That’s all I can say.
Frode Hegland: It doesn’t help being so virtual; I do agree with that strongly. So we’re going to try to organize another summer social, so whoever can come to that, it will be great. And of course there is September and October.
Speaker5: November.
Dene Grigar: November. November.
Frode Hegland: So I put together an agenda. I went to great lengths. So the link is there.
Frode Hegland: And yeah, the first item. I just wanted to mention really briefly that the Reader is supposed to be finished today and we’ve done the semantic highlighting. It works now. It’s the semantic view that we should get. Yep.
Dene Grigar: Are you going to submit a paper? Or have you decided that my paper is the only one?
Frode Hegland: No, I am going to submit a paper. I just had other things. I’ll get something done in the next few days. I mean, I haven’t, but I’ll get it in a form where I can share it and we can put things in there.
Dene Grigar: All right. Thanks.
Frode Hegland: I’m still student material, you know, meaning last minute, please. So, yeah, I apologize. I don’t want to cause any stress, but the discussions we’ve had recently on what things are in the space I find really relevant to that. So hopefully we will get a stage further with that today, to the point where it’s easier to finish the writing bit.
Frode Hegland: Right. So what I was saying about Reader is: in the test version you can now press G for grammar, I for interesting, things like that that have semantic meaning, and it will assign a color to that text. So Mark, this should have made it easier when you went through my thesis, for instance, because some of them are language or grammar, so you don’t have to explain every time what the issue is. The second thing we’re trying to do now is that when you’re in the outline view, you should be able to see all the annotated text in that outline view, or only specific annotated text, such as everything that is important and nothing else. So that carries over to the kind of logic we’re thinking about, which is why I wanted to mention it. And Andrew, please start talking about the. Oh, yeah. Anybody else have any other announcements?
Speaker5: Other than watch your house party this weekend.
Dene Grigar: How many? Who all is coming besides Adam? It’s going to be Bobby and Mark.
Frode Hegland: And Leon.
Dene Grigar: And Leon.
Mark Anderson: Okay, I can only make it up for Friday, but yeah.
Frode Hegland: Yeah. You’re planning to drive to Wimbledon and train with us then, right?
Mark Anderson: No, I’ll train up to Wimbledon, and then I’ll train back.
Speaker5: Which I think. Oh.
Mark Anderson: Actually, no, that’s the point, I could do it anyway. Well, we’ll talk offline, because my travel arrangements will bore everyone.
Frode Hegland: Okay. So, Andrew, I will now copy the link across for testing. This is weird. Sorry, I’m just in Slack. But there it is. Okay. Yeah, please start presenting and I’ll make the page ready. It took significantly longer to get from town back out today, so I really apologize for things not having been done. We literally came in the door with two minutes to spare.
Andrew Thompson: No, it’s good timing, because this update this week is pretty light. I spent a bit of time working on some fancy stuff, and then I spent the other part of the work week trying to develop some of the map interactions. But the map’s not ready yet, so I just sort of turned it off for this update and pushed what I had. So the main thing you’ll notice is sort of document interaction upgrades. Before, documents would just sort of generate; you could see what they were doing. So on a slower internet connection, or if you’re loading a large document, you can see the blocks of text sort of popping in as they’re populated, and then the whole document snaps in and becomes what we see in the normal read view. And, you know, that’s not great; it kind of looks unfinished. So I added a sort of loading animation. You don’t see the document until it’s fully loaded; instead, there’s a little spinning icon. Once it’s done loading, it’ll appear with a nice little animation of it sort of expanding. It feels a lot more polished with just some simple UI stuff. And then I also added a close button, because I realized we end up cluttering the space pretty quickly, and we’d always want to get rid of things. I think my solution for the close button was pretty subtle. It’s just a tiny little dot in the top corner, but when you hover over it, it expands into a proper icon. And that does the same thing: closes the document with a little animation. That’s pretty much it; that’s all that’s in this update. Unless I forgot to turn something else off and there’s, like, broken map code, but we’ll see. Shouldn’t be there.
Dene Grigar: Thank you. Yay!
Andrew Thompson: Nice, quick and easy.
Mark Anderson: A quick question. Just looking at the Texlab site, I’m trying to find the 22nd May link. Does what’s at the end of that now include what you’ve just done, or is there a different endpoint we should be looking at?
Andrew Thompson: I can pull up the Basecamp link here. Usually the Basecamp content is in. Got it. Yeah.
Frode Hegland: Just. Yeah. Sorry for interrupting, but. Yeah. Just reload. I’m sorry.
Mark Anderson: Okay, got it. Yeah. Cool. No, that’s great. The one thing I must say: I wish there was a magic thing that would allow me to basically tell my web browser on my computer to tell the Oculus to just open this damn page, so you’re not faffing around with headset on, headset off, glasses on, glasses off, to get from screen space to the thing. But that’s not your problem. That’s our collective problem.
Frode Hegland: Well, I mean, that’s why I should have updated the page, obviously. So
Speaker5: Sure, sure.
Frode Hegland: To get around it. And one thing that.
Andrew Thompson: We could do, if people are interested in this, two different links: one for the archived test version, and then a current testing folder that I just update every week. So you always go to the same link every time and the space updates. That might be useful for people. Seeing a thumbs up from Mark.
Mark Anderson: No, I think that makes sense. And I think we’re all grown up enough to know that if we go to the latest thing and for some reason it just doesn’t work, there are other ways into it. But it’s this thing of cutting down the time it takes to get to the thing that you’re looking at, which is partly the experience of using the tool. But if you don’t have lots of stuff easily lodged in the system, then that could be useful. Fabien.
Fabien Benetou: Yeah, just a quick trick I posted in the chat, but if you are on the same Wi-Fi as your headset, the HMD Link web page works quite well. And if you append the URL after the HMD Link question mark, then it will directly open with that URL. Basically, what I have on my headset is HMD Link as a bookmark; on the desktop, I open that, paste the URL, and it opens on the headset, and that solves most of the problems. You can also use an app from Oculus for doing this kind of thing, but I don’t want to use it if I don’t really have to. So yeah, that’s my recommendation: HMD Link, because the back and forth is really painful. And what Frode already did with the Future Text Lab, like the short URL, helps. But overall, HMD Link to me solves like 99% of those back-and-forths of opening all the new content.
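The HMD Link trick Fabien describes can be sketched as a one-line URL builder. This is only an illustration of his description (the target URL appended after a question mark), not verified against the tool’s documentation, and the base address below is a hypothetical local-network placeholder:

```javascript
// Sketch of the HMD Link trick: the target page is appended after a "?"
// so that opening the combined URL on the headset loads it directly.
// The base address here is a hypothetical placeholder for wherever
// HMD Link is served on your local network.
function hmdLinkUrl(hmdLinkBase, targetUrl) {
  // Per Fabien's description, the target is appended verbatim after "?".
  return hmdLinkBase + "?" + targetUrl;
}

// Example: open the lab's demo page on the headset.
const link = hmdLinkUrl("http://192.168.1.20:8080/", "https://futuretextlab.info/");
console.log(link); // http://192.168.1.20:8080/?https://futuretextlab.info/
```

Keeping the HMD Link page bookmarked in the headset browser then avoids typing long URLs in VR, which is the pain point being discussed.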
Frode Hegland: I’m just trying to simplify on the page here. Just one second. Trust me, I’m not on Facebook. And if I was, I wouldn’t tell you.
Speaker5: But I would.
Frode Hegland: Yeah, you would know. You would know.
Speaker5: So sorry. Almost done.
Mark Anderson: Big brother is always watching.
Frode Hegland: Big Swedish brother. That sounds like some kind of a dance anthem, doesn’t it? Okay, so what I’ve done on the front page now should, I think, solve some of our issues. Please go to futuretextlab.info and just reload that page.
Speaker5: On the.
Frode Hegland: The bottom of the list of links goes straight to our current demos, so even if you forget it, your browser will remember it easily, right? Yay, everyone says, I hope.
Dene Grigar: I’m trying to figure I just went there. What am I supposed to be looking at.
Frode Hegland: On futuretextlab.info? Nothing, just the main page. Yeah, if you please reload that, in the bullet-point menu list the bottom item is now Current XR Experiments. That goes straight to the list of our experiments, so when you’re in the headset it should be really quick to click through. And that’s the one that, other than when a Swede makes me take too much time coming from town, should be updated every Wednesday. Fabien, do you have your hand up, or do you just want a high five?
Fabien Benetou: Now it’s an old hand, but we can always have five. Okay.
Speaker5: Hang on. Indeed. Right.
Frode Hegland: So, following the agenda: Fabien, you mentioned something on Twitter about something or other, and you suggested maybe presenting it so we could better understand it. Now would be a good time.
Fabien Benetou: Yeah. So let me try the usual.
Frode Hegland: Oh, while you’re setting it up, I just want to mention to the group, particularly Dene and Andrew, that Fabien is bringing his Vision Pro headset, so we will have two in house for the weekend. So if you guys want to call in and do one of the special meeting things, we can try a brief test today. It’s really interesting to do it in a group and really talk about it while we’re doing it.
Dene Grigar: I would love to do that. I’ve been wanting to do that for quite a while, so let me know when you want to.
Fabien Benetou: That actually makes me curious, because did anybody do more than a call with two people at once?
Dene Grigar: No. This time there’ll be three. It’d be great.
Fabien Benetou: Does it work, though? I have no idea. Five people.
Frode Hegland: Five yourself plus four others.
Fabien Benetou: Well, plus, now you kind of put me on the spot. I said I would do my best to try to bring it. And now you’re saying on the record I’m bringing it, so I guess I have to for research.
Speaker5: He volunteered.
Frode Hegland: No, sorry.
Speaker5: Yeah.
Frode Hegland: That’s the second time I put you on the spot. Next time, you can shoot me.
Fabien Benetou: And on the record. Great. So, do you.
Dene Grigar: See, my grant’s not over. No shooting until the grant is over.
Fabien Benetou: Okay.
Speaker5: Fair enough.
Frode Hegland: Is that some sort of a Texas rule? I guess it makes sense.
Dene Grigar: No, nothing Texan about that. They would say shoot and shoot without looking. Shoot. No. Kill. Shoot. Make sure you kill.
Speaker5: Yes. Oh, can you see the screen?
Frode Hegland: I can, and your cubes have got cool new corners. Please do tell us what’s happening here.
Fabien Benetou: So I’ll press play. I’ll redo a bit of what I did last time, namely that you can pinch around to travel in space. Which is kind of new, because until now I could travel in space but not bring the, like, execution environment along. I don’t want to overshare, but I have a bunch of ideas when I wake up or go to sleep, which is also why I nap when I can, and this one happened at that special moment in between those two states. And I thought: oh, if I could teleport around, I could also teleport vertically, which I did in the past, but wouldn’t it be fun if the teleporter itself could be moved around? Which is what I did. So this little box now, I can teleport onto it, but then I can bring it somewhere else, look at it and teleport again, and then I can go basically wherever I want, including places that were basically not reachable until now. So I think in itself that’s interesting, but that’s not really what I want to highlight the most. What I want to highlight the most is that it took literally one word: I added the target component, which is what allows me to grab things and move them around. And thus, because of composability, I had this totally new behavior. And I also want to argue why it matters in this context about the future of text: because until now, my future-of-text documents, code, etc. were mostly on the horizontal plane and not really vertical. And now it can be vertical, because, as I showed the other day with a Lego scaffolding, a scaffolding tends to be vertical. So now I can think of this kind of composable tool as something vertical. And if we can freely explore the volume and move it around, I think it’s quite exciting. But again, with just one word, just because I brought together an object which was teleportable and then gave it the ability to be moved around.
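Fabien’s actual framework isn’t shown here, but the composability he describes, where attaching one extra component to the teleporter makes it grabbable, can be illustrated with a minimal entity-component sketch. The component names ("teleport-on", "target") and all functions are hypothetical, modeled only on his description:

```javascript
// Minimal entity-component sketch: a teleporter pad gains "carry me
// around" behavior just by attaching one extra component.
function makeEntity(position) {
  return { position, components: new Set() };
}

function addComponent(entity, name) {
  entity.components.add(name);
  return entity;
}

// "target" lets a grabbed entity be repositioned by the hand.
function grabMove(entity, newPosition) {
  if (entity.components.has("target")) entity.position = { ...newPosition };
  return entity.position;
}

// "teleport-on" lets the user jump to the entity's position.
function teleportTo(user, entity) {
  if (entity.components.has("teleport-on")) user.position = { ...entity.position };
  return user.position;
}

const user = { position: { x: 0, y: 0, z: 0 } };
const pad = addComponent(makeEntity({ x: 1, y: 0, z: 1 }), "teleport-on");
addComponent(pad, "target");          // the "one word" addition
grabMove(pad, { x: 4, y: 3, z: 2 });  // carry the pad somewhere unreachable
teleportTo(user, pad);                // then jump to it: user ends up at (4, 3, 2)
```

The point of the sketch is the design property, not the API: because grabbing and teleporting are independent components, combining them needs no new code, only the attachment.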
Speaker1: And I think having such anchors within the document (we have that in HTML, document anchors, links within the document), having visible links to viewpoints or vantage points within the space, is something I think will be very important for larger collections: you can get not just to a specific document, but also to a specific view over many documents. So I think it will serve a dual purpose, both for transportation and for crafting a nice view, an overview of a collection of documents. And you can have multiples. And as we said before, they could be like in a museum with an attached audio guide, in that we had a small little ghost (we called it a ghost at some point), or even a gestural figure with hands and voice that could point to things from a specific vantage point. That could be an interesting way of doing exhibitions where the space and documents blend together in an interesting way. I don’t know if it’s for Sloan, but it’s for the future of virtual documents at least.
Fabien Benetou: Two quick remarks, the first one being: it’s indeed something I did not show in this example, but if you add a hash and the teleporter ID at the end of the URL, then instead of starting from, let’s say, the center of the virtual room, you start from where that teleporter is. So that means if somebody gives you the URL with that specific teleporter viewpoint, you start from there. It’s like the position is embedded in the URL. And I think it’s how we do it in documents right now: for example, if I copy-paste the URL of something I’ve been reading at work and I want to keep reading it at home when I’m already halfway through, or I send a specific paragraph to someone. I think there are good ways to navigate through documents, from web browsing, that we can learn from. And that’s also, I think, one of the competitive advantages of WebXR. Yes, we know some of the costs, for example some features being a bit late, or some performance issues that we hope are going to catch up soon. But there are also advantages, like this one, namely anchors, that we can directly use.
Fabien Benetou: And for example, going back to the HMD Link suggestion for Mark earlier: if one were to use this, then you share the link with the anchor, the hash at the end, and you start from that viewpoint. For now it’s only the viewpoint, not the orientation, but one can imagine playing with that too. And to go back to the second point from Adam: there is a game I’m hooked on at the moment, Elden Ring, and one of its mechanisms is that you can leave marks in the virtual world, so you can say, oh, there is a bad guy over there, for example. And you also have ghost-like behavior, people pointing or dying a certain way, and then you can learn from that. And I think that’s definitely something we can use. Honestly, when you say it like this it sounds very video-game-ish, not necessarily exciting for academic research, but I think there is at least something to learn from it. And it’s, let’s say, the basics of education: you point at something, joint attention, and that helps a lot.
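The anchor idea Fabien describes, a teleporter ID after the # in a shared URL determining where the visitor spawns, could be sketched like this. The registry, the IDs, and the positions are hypothetical; this only illustrates the lookup, not his actual implementation:

```javascript
// Sketch of the URL-anchor idea: a "#teleporterId" fragment on a shared
// URL makes the visitor spawn at that teleporter instead of the room's
// default spawn point.
const teleporters = {
  deskView: { x: 2, y: 1.6, z: -3 },
  overview: { x: 0, y: 10, z: 0 },
};

function startPosition(url, registry, fallback = { x: 0, y: 0, z: 0 }) {
  const hashIndex = url.indexOf("#");
  if (hashIndex === -1) return fallback;   // no anchor: default spawn
  const id = url.slice(hashIndex + 1);
  // Unknown anchors fall back to the default, like a missing HTML anchor.
  return registry[id] ?? fallback;
}

console.log(startPosition("https://example.test/room#overview", teleporters));
```

As Fabien notes, the fragment could later carry orientation as well as position; the same lookup pattern would just return a richer record.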
Speaker5: Yeah. Adam.
Mark Anderson: Do you have a response to that before I come in?
Frode Hegland: I’m okay waiting, but was that just on what you said? Yeah.
Speaker1: One thing that will help, speaking of ghosts and anchor points, spatial anchor points or places you could go, fixed viewpoints, is multiplayer, which we are going to do at some point. I’ve noticed in virtual worlds, even in more information-dense worlds, presentations and so on, people teleport and move around and you don’t know where they went. So having some sort of ghost trail, where you could see that someone teleported from one place to another, would be very useful. So you could follow along: if you have some sort of guide guiding you through a document, you want to see where they came from and where they went at some point. Can we call.
Frode Hegland: It a light trail?
Speaker1: Light trail would be better than ghost. Ghost could be a bit spooky. Yeah. Over.
Speaker5: I got ghost.
Frode Hegland: And Murder Wolf. That works.
Speaker1: No to drop your murderball.
Frode Hegland: The marketplace.
Speaker1: Writer’s room.
Mark Anderson: So this question comes out of ignorance; I don’t know whether there’s a hidden thing behind it. I really like this idea of, in a sense, having the choice: either you move the environment around you, or you leave the environment as it is but you move within it. I like that flexibility. I just wondered, does that pose any issues for the way the environment is constructed, or the objects? I don’t know. If it doesn’t, that’s great, and if it does, I hope we find a way around it. It’s a really interesting idea, so thanks for that.
Fabien Benetou: Well, some of the challenge it poses: if you have a hierarchy of objects, like a scene, and you put everything, let’s say, on the first level, so everything has the single scene as a parent, it’s pretty trivial. But when you start to have transformations, scale, rotation, etc., it can become a little bit hairy. Which is why teleportation in my environment did not work at first: I was literally leaving my virtual hands behind. So I was teleporting there, but then I shook my hands and they were, like, ten meters back. Kind of fun, but a little bit strange. And if I didn’t bring the hands, then the nodes and the code were not there. The problem was that I wasn’t properly managing the hierarchy of objects. Now I kind of do, but not properly, so some of the transformations, like scaling, don’t work, which is why initially, when I put the multiple cubes on the hands, some of them were too big or too small. But that’s just because I’m doing it quickly; I don’t think there is anything particularly complex if done right. And I think it is important to do it right, because, for example, if you manage the hierarchy properly, you can scale or move only part of the world. Then, instead of grouping by lassoing, circling things, if you have a mental model of the hierarchy in that document or that scene, and you know this is a tree of objects, you take the parent of those objects and you move or transform that, and that’s quite a powerful way. So I think it can be done. I don’t do it properly right now, to be very honest, only properly enough to work and do the demo; it would have to be refactored and done properly at some point. But it’s at least enough, in my opinion, to open up some ideas and do some manipulations that are not fake, but real.
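The "hands left behind" bug Fabien describes comes down to scene-graph parenting: a node’s world position is its local offset accumulated through its ancestors, so teleporting the player rig only carries along what is parented to the rig. A minimal, translation-only sketch (his hairier cases add rotation and scale, which this deliberately omits):

```javascript
// Translation-only scene-graph sketch. If the hand is a child of the
// player rig, moving the rig moves the hand's world position too; a hand
// parented to the scene root would be "left behind" on teleport.
function node(local = { x: 0, y: 0, z: 0 }) {
  return { local, children: [] };
}

function addChild(parent, child) {
  parent.children.push(child);
  return child;
}

// Walk the tree accumulating local offsets until we reach the target.
function worldPosition(root, target, offset = { x: 0, y: 0, z: 0 }) {
  const here = {
    x: offset.x + root.local.x,
    y: offset.y + root.local.y,
    z: offset.z + root.local.z,
  };
  if (root === target) return here;
  for (const child of root.children) {
    const found = worldPosition(child, target, here);
    if (found) return found;
  }
  return null;
}

const scene = node();
const rig = addChild(scene, node());                           // player rig
const hand = addChild(rig, node({ x: 0.3, y: 1.2, z: -0.2 })); // hand rides on the rig

// Teleport: move only the rig; the hand's world position follows it.
rig.local = { x: 10, y: 0, z: 5 };
console.log(worldPosition(scene, hand)); // roughly (10.3, 1.2, 4.8)
```

Moving a subtree by retargeting one parent is exactly the "take the parent and transform that" manipulation Fabien contrasts with lasso-grouping.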
Speaker5: Peter.
Peter Wasilko: Yeah, I was wondering, do we know if there’s any difference between vertical translation of a person’s viewpoint versus horizontal-plane translations in terms of inducing sim sickness? Would it be better if your movement happens in the vertical plane versus moving sideways and forward and backwards? And also, what do you think would be more comfortable for the viewer? Let’s say we have a very large body of text, and we decide we don’t want to go with the paginated model; instead, we just format it and wrap it at lines of, say, 50 characters wide, and it might wind up working out to be 12 feet tall for the entire document, if rendered at whatever font size we choose in the virtual world. Would it be better for us to move up and down over those 12 feet, or for the panel the text is written on to move up and down? And is it better if it jumps versus a smooth scroll? Any thoughts?
Frode Hegland: Yeah, I have some thoughts on that, because of the Bob Horn mural, that test that Randall did for us. When you move in the environment with a joystick or something, to me it became nauseating. But with the approach where you move the environment yourself, so you pinch to move the entire mural, I never felt motion sickness. Similar is Softspace, where you do a grab motion and you can move everything. It’s very strange how we’re wired. So I can’t answer the up-or-down-versus-sideways question, and I understand that’s what you really asked, but moving ourselves in space versus moving the space is very different. I think what Fabien was talking about was also jumping, which is a lot less motion-sickness-inducing than anything else. And I see the hand behind me here, but the other point, Mark, I think was close to what you talked about: coordinates for things in the space. Having coordinates for yourself in the space was only briefly touched upon in the beginning, but that could be a more important thing. You know, we talk about having knowledge as sculptures, being able to jump around the sculpture; maybe these locations, Fabien, and obviously they can be live, you throw the view camera, to call it the teleportation device, may also be fantastic to have here: a knowledge sculpture, and here are the viewpoints, and they suggest different things. And then we need to determine whether it’s best to move the thing or ourselves.
Speaker5: You understand.
Speaker1: And I have some personal observations from trying things out, especially work with camera passthrough on Quest, and lots of text. For me, it worked much better to lay out material horizontally, basically at eye level, more like whiteboards all over the place, instead of having text disappearing through the ceiling or floor. So that’s something to consider if you have material that sticks out in the vertical direction in an AR mode. If it does, I think it’s better to send it upwards: for me personally, it felt better to have lots of material in the sky than having it blocking physical furniture, being a potential risk, or cluttering the floor. So that’s some observations from me trying lots of text in AR. Specifically for VR, my observation is, I’ve been trying, at least with kids: flying around is something they grasped within perhaps 10 or 20 seconds. Not minutes, not hours. In Mozilla Hubs, for example, I picked kids who had never flown in 3D before, and in 20 seconds everyone was flying, up and down. So it’s a potential kind of movement in VR that could be used. But there are many tricks, and I don’t know all of them, in how you handle peripheral-vision objects; they can both induce and reduce this kind of VR sickness, and there are many tricks we could look into if we ever go into some sort of transportation in space like that. Maybe Fabien knows more about it. There are tunnel-vision things where you block off some of the periphery, that’s one, but certain colors and certain particles in your peripheral vision can also help with VR sickness.
Fabien Benetou: So it’s a lot of different factors. The biggest one, in my opinion and experience, has been basically the number of frames per second, and world design. So basically, if there is any lag and motion at the same time, even if you have a lot of experience, it’s going to be pretty terrible. Luckily, or rather, I guess because Meta invested so much money in it, they decided we can’t trust developers and world designers, so they force a way to make it at least feel like it’s at 90 fps, or at least a reasonable minimum frame rate. So this is mostly solved. And then, from there, it’s still a lot of experience design. For example, if you move around of your own volition, you can handle so much more: if you, say, move in the 3D space with your legs, or even what I showed here with the cube that you look at. Whereas if you get thrown at random onto, how do you say, a cube in space five meters above you, and you haven’t decided, it just happens to you, it takes you a couple of seconds to get comfortable. And you can get uncomfortable because, for example, it looks like a cliff, basically: there is no floor, and even though you know there is a floor, it’s really strange.
Fabien Benetou: Whereas if you make the decision yourself. So I think volition, or agency, has a huge impact. And then it means you can explore crazier things, like flipping the world around: literally, you rotate the world. And if it’s something that happens to you, you might even fall and puke at the same time; it’s really a bad place to be in. But if you start to rotate it yourself, then you can handle a lot more. And then there are a couple of design tricks: you add a platform, and even if you teleport around, it feels like you’re still on the ground, so it’s not a problem. I think Adam also started to hint at this: you can blink. So you turn off the environment; everything is black, basically, for half a second. It’s really not much, but long enough to trick your brain into thinking, oh, I’ve been thrown or teleported or whatnot. I don’t know exactly what happens in the mind, but somehow you accept it. For example, if I close my eyes and I turn my chair, it’s not a big deal, whereas if I do it with my eyes open, I might get nauseous. I mean, there are so many little things like this.
Fabien Benetou: My, let’s say, big picture is: I think the mind is a lot more flexible than we assume, and we’re just not really trained to give ourselves the freedom to use that. So I would tend to explore a lot crazier things. When I did the first experiment there, with moving the teleporter around and looking at it, my first reflex was: I need to add a platform, because it’s going to feel weird, it’s going to feel insecure. And then I said, okay, I’m going to try without; I’m too lazy for this. And because I did the action of teleporting myself, I did not need to have a platform underneath, which is what’s normally suggested. So I think it’s mainly about agency. And again, as long as you have the minimum specs in terms of frames per second, and I guess readability of the environment, visibility, maybe even trust in it. For example, if your software solution is buggy and you get teleported or thrown around unexpectedly, then maybe you think it’s not the proper behavior and you’re always a bit on guard, like when you read a grammatical error or a typo in a text and you’re always on the lookout for more. So I think it’s the intersection of all this.
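The "blink" comfort trick Fabien mentions (black out the view for a fraction of a second and move the user while the screen is dark) can be sketched as a tiny timing function. The durations and the linear fade model are illustrative guesses, not measured values:

```javascript
// Sketch of a blink teleport: fade the view to black, move the user
// while the screen is fully dark, then fade back in.
function makeBlink(fadeMs = 150, holdMs = 200) {
  return { fadeMs, holdMs, moved: false };
}

// Returns screen darkness in [0..1] at time t (ms since the blink began),
// performing the teleport exactly once, while the screen is fully dark.
function blinkStep(blink, t, user, destination) {
  const { fadeMs, holdMs } = blink;
  if (t < fadeMs) return t / fadeMs;             // fading to black
  if (t < fadeMs + holdMs) {
    if (!blink.moved) {                          // move once, unseen
      user.position = { ...destination };
      blink.moved = true;
    }
    return 1;                                    // fully dark
  }
  const back = t - fadeMs - holdMs;
  return Math.max(0, 1 - back / fadeMs);         // fading back in
}

const user = { position: { x: 0, y: 0, z: 0 } };
const blink = makeBlink();
blinkStep(blink, 75, user, { x: 5, y: 0, z: 5 });  // mid-fade: no move yet
blinkStep(blink, 200, user, { x: 5, y: 0, z: 5 }); // dark: teleport happens here
```

The design point matches what Fabien says: the user never sees the discontinuous motion, so the brain treats it like closing your eyes while turning a chair.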
Frode Hegland: Yeah. Thank you. Fabian. Mark.
Mark Anderson: Yeah, I assume that, in a sense, motion sickness in VR is essentially no different from, you know, seasickness or airsickness, because it’s partly the different sensory inputs not agreeing. I mean, if you stand on the deck of a ship and see the ship going up and down, you’re far less likely to feel seasick than going and standing inside, where your ears are saying you’re going up and down, your eyes are saying you’re not, and your stomach, you know, reacts accordingly. So, in a sense, it’s about reducing that sense of disconnect, and some of the things already mentioned seem to speak to that, because in a sense that’s what you’re doing: you’re giving your brain more time to catch up. I think, from my limited experience of VR and XR, it’s also that if things are quite low-res, whether that’s low frame rate or just a bit fuzzy, your eyes and brain are working a lot harder. The visual bit of your brain is basically trying to turn it into something crisper; it’s trying to understand something that’s a bit fuzzy. And I’ve definitely felt strain from that.
Mark Anderson: So not sickness, but it definitely can be headache-inducing. And the last point, and I don't know if it's pertinent, but one thing I've noticed in films of people doing aerobatic flying is that people often move their heads before they move the airplane. I think that's again this thing about agency: if you know you're going to do something, then when something weird happens, it's not weird, because you knew it was going to happen. And I assume that's the reason you probably look to the horizon, or something like the horizon, like a point, because you then know there's going to be a rotation around it. If you're just whirling yourself around with no point of reference, I suspect that's more difficult. But I think it all feeds into the same thing: whilst any individual of us will be more or less prone to some sense of sickness from the moving, I don't think it has to be an expectation, and the methods people have mentioned will, I think, go well toward reducing it.
Dene Grigar: Can I respond to that? Yeah.
Mark Anderson: Yeah. Sure, sure. Yeah.
Dene Grigar: Well, I mean, I never get sick inside of a headset, or when I'm on a boat, or when I'm in my car. Why? What causes that? I mean, why some people and not others? I absolutely do not feel any of it.
Mark Anderson: I think it’s just the resilience.
Dene Grigar: I can play for hours.
Mark Anderson: But, I mean, I've spent enough time at sea, for instance, to know that anyone can get seasick if it gets rough enough. So there's always more than one factor at play. But the primary thing, I think, tends to be one's individual resilience, and I don't mean that in a judgmental way, to differing input from different senses. In the physical world, in terms of air and sea, it's to do with the eyes and the ears not getting the same sensory input, or an input that gives the same result in the brain. Peter.
Peter Wasilko: Okay. In Hyper-Reality, Curtis Hickman posited a theory as to what causes you to get nauseous: it's an evolutionary response, because the most likely thing, as man was evolving, to cause that kind of sensation, where your eyes weren't syncing up with what your vestibular system was reporting as far as motion goes, would be if you'd eaten something poisonous. Therefore, if your body sees those kinds of sensory inputs, it concludes that you have poison in your stomach and makes you want to puke your guts out, to clear your body of the poisonous plant you ate by accident. So that was one theory. I also highly recommend the chapter he devotes to movement in VR experiences, based on an experience his company, The Void, built: simulating traveling on an anti-grav skiff for a Star Wars experience. What they found was that if they represented the motion as having a very rapid, short-duration acceleration at the beginning, with constant motion in the middle, your brain wouldn't have a chance to pick up on the fact that you weren't really accelerating during the acceleration phase. It would then see the constant motion and assume that you were basically in a moving vehicle, like a car traveling at a constant speed; since there was no acceleration, that wouldn't faze you. Then they had a problem with the Star Wars property: people said, well, we want the experience to feel bigger, but we don't want it to take any more time.
Peter Wasilko: So they wanted to make it feel bigger. Basically, you were traveling to a distant outpost through a desert, you could see the base in the far distance, and they wanted the volume of the space to seem much bigger than it actually was. Now, because in order to avoid puking they wanted to keep your travel at a constant rate of speed, they had the problem of how to reduce the time it takes to get there. So they leveraged the psychological phenomenon of change blindness to very briefly distract you from a physical jump in location. What they did was have a flying saucer swoop down and block your view of the base you were traveling towards, and then they would jump you one third of the distance, having all of the surrounding objects, if you were looking a little bit to the side, also jump at that exact same rate. And if you were looking in the reverse direction, they could do something similar by distracting you slightly and having the item in the distance appear to jump to be more distant. But as long as you were distracted at the instant of the teleportation jump, your brain was completely oblivious to the fact that the surroundings had moved. You just have to be visually distracted so you're not focused on the item whose relative size is changing during the big jump. And they used that very effectively.
Speaker1: So send in a flying saucer and change the page number, then, for PDFs, while you're suddenly reading.
Peter Wasilko: It could just be, you know, maybe the intelligent agent could pop up for a second, and you'd have a little AI face blocking your view of the sheets while your location in the room changed slightly; then that message screen disappears and you're in the new location. But you would have thought you were traveling at a constant speed, even though you weren't. So that's how they can mess with time. But I strongly, strongly recommend Curtis Hickman's book on hyper-reality. He has a whole chapter, and he goes at length, with lots of nice footnotes, into all those psychological effects and how they were able to play games with them and leverage them. And they found there were certain percentages, described in the book, for how much you can stretch the relative distance before a person notices the distortions are taking place, as long as you stay below those parameters. And of course, it depends on whether you're focused on a task or not. If you're not focused on anything other than watching where you're going, then the amount they can play with goes down dramatically. But if you're distracted by something, then what they can get away with in shifting your perspective, before your limbic system kicks in and makes you start to feel nauseous, goes up.
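The motion trick Peter describes can be sketched in a few lines. This is purely an illustration of the idea, not anything from The Void or the book: the velocity numbers, the one-third jump fraction, and the function names are all assumptions.

```python
def skiff_position(t, v_cruise=5.0, t_accel=0.3):
    """Rider's distance along the path at time t (seconds).

    A very short, sharp acceleration phase (t_accel) followed by
    constant cruise velocity: after t_accel the vestibular system
    reports nothing, which matches what the eyes see in a vehicle
    moving at constant speed. All numbers are illustrative.
    """
    a = v_cruise / t_accel
    if t < t_accel:
        return 0.5 * a * t * t           # brief, real-feeling acceleration
    d_accel = 0.5 * a * t_accel ** 2     # distance covered while accelerating
    return d_accel + v_cruise * (t - t_accel)


def change_blindness_jump(position, total_distance, occluded):
    """While the rider's view is blocked (the swooping flying saucer),
    silently jump them a third of the remaining distance forward."""
    if occluded:
        return position + (total_distance - position) / 3.0
    return position
```

The point is the shape of the curve: the only acceleration is brief enough that a distracted brain never gets to compare it against the missing vestibular signal, and the teleport happens only while the view is occluded.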
Speaker5: Yeah.
Frode Hegland: So on that, but just briefly, because of Mark's comment further up about Liquid Mode: yeah, I looked into it, Mark, and it's basically, I would say, Adobe admitting there's a structural problem with PDF. However, in Reader I've been playing with a flow mode that essentially does that. All I've asked ChatGPT for is to do a little bit of extracting. And what's fascinating is that ligatures are fixed. I've had the guys working on ligatures from PDFs, and we can't figure out a way to cleanly guess what the word is, so that's a useful thing to have. Now, with the Sloan work, we're not going to be moving at all. You know, it's swivel-chair based. So that's a very important thing. And I think one of the key things Fabien was presenting is that you're not moving, you're teleporting. So at the moment that UFO is in front of you, you're literally not moving, and that is a lot less disorienting. But to answer an earlier question about what kind of motion: I've noticed when I do move in VR games, if I move straight ahead, it's okay. So what I tend to do is move straight ahead, then turn, and then go straight ahead again. Once I turn while moving, my inner ear doesn't like it. And that's why the Quest Pro, I think, is very good, because you do have peripheral vision. So even though you can see things moving in front of you, it helps anchor, at least for my biology. And I think that's why some of the things in the Vision Pro are 3D but framed, you know, like the dinosaur experience or whatever. It's not all-encompassing around you, so you have enough of an anchoring. So the balance between anchoring and motion is interesting. Yeah, Fabien, and then we move forward, literally.
Fabien Benetou: Yeah, to go back a little bit on Peter's point: I mean, it's a great book, and overall the work he's done is entirely pragmatic. He has a magician kind of mindset: okay, we have different theories and ideas, but let's see what works, basically. And I think doing it commercially is good in the sense that we know it works for a lot of people. I don't want to say for most people, but because they needed to make money off it, with Disney, it's not just for somebody who is super experienced in VR, for example. But most importantly, on the very last point, and that's why I mentioned the magician part: a lot of it is managing attention. And indeed, for example, if you make some noise in that part of the world and I'm looking there, you on the webcam can see behind me, but I can't. And if everything literally changes behind me, it's not a big deal. And there are a couple of games like this, I forget the name, that are basically non-Euclidean spaces: the space in front of me is okay, but things change, for example, behind me. So I can make a staircase that, when I turn around, is there, when it wasn't there before.
Fabien Benetou: So the world kind of grows as you go. And I think, again, it's my point earlier: it shouldn't have worked, or I should have wanted a platform when teleporting up because it felt safer, and yet, somehow, I guess because of agency, it's okay. So I think it's better to just try and not make too many assumptions. And maybe managing attention is the same, like the blinking part to make teleportation safer: somehow it works. Somehow it feels like we're dumb mammals walking around. Like, how can we be tricked so easily? And yet somehow it works. Black space and boom, I can accept that I've been thrown somewhere else. And it works. It's kind of crazy. But it does show that our attention is not just flexible: if we learn to shape it the right way, it enables us in that virtual space to do a lot more, including things that are normally not possible. So I think the magician mindset, the "look at my assistant and not my hands doing the magic trick," is quite powerful, actually.
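Fabien's "change the world behind the user" trick boils down to a visibility test before mutating the scene. A minimal sketch, assuming a top-down 2D simplification and an arbitrary 110° field of view (a real engine would test against the full 3D view frustum):

```python
import math

def is_safe_to_modify(user_pos, view_dir, obj_pos, fov_deg=110):
    """True when obj_pos lies outside the user's horizontal field of
    view, so it can be moved, swapped, or grown without being noticed.

    user_pos, obj_pos: (x, y) points; view_dir: normalized (x, y)
    facing direction. fov_deg is an assumed, tunable value.
    """
    to_obj = (obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1])
    dist = math.hypot(*to_obj)
    if dist == 0:
        return False  # an object at the user's position is never "behind"
    cos_angle = (view_dir[0] * to_obj[0] + view_dir[1] * to_obj[1]) / dist
    # Outside the half-angle of the field of view -> out of sight.
    return cos_angle < math.cos(math.radians(fov_deg / 2))
```

So a staircase spawned behind the user (or anywhere this test returns true) simply appears to have always been there when they turn around; non-Euclidean games apply the same check before rewriting geometry.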
Speaker5: So.
Frode Hegland: Yeah. Really, really important. So Paul Smart, a colleague of Mark Anderson and me: the modern philosophy of how the brain works is, as I think we've discussed, this kind of interruption-based view. The brain is a prediction machine, and it gives you awareness when something suddenly interrupts you, like that, right? Adam couldn't not react to that. But anything that is as you expect, we don't react to. So managing expectations, I think that's what you're talking about, Fabien, and also you, Peter: being aware of this kind of motion at a deep level makes a difference. One thing that Adam and I talked about today, because you talked about behind you: maybe as you do work and you save something, it goes over you and behind you. So, like the backup history on the Mac, you can turn around and you have a history of your work. So maybe we decide on something spatialized in our environment. But I also like the non-Euclidean style, where you can literally warp space to your effect. That's when it's becoming really interesting. Now we're coming up to the first hour. What do you think? Is this a good time to move on to our scenario?
Dene Grigar: So we need time. We need time for that. Okay.
Frode Hegland: Yeah. Do you want to do screen share since you have the latest and greatest version?
Dene Grigar: Let me see. I didn't make any changes to it at all, Frode.
Speaker5: Okay
Dene Grigar: You go ahead. I didn’t have time.
Frode Hegland: Time is precious. Okay.
Speaker5: So.
Frode Hegland: So, this is something we've talked about a little bit before. I'm just going to share. Can you see this?
Dene Grigar: Yeah. And can we explain to them: this is not a script.
Speaker5: Yeah.
Dene Grigar: But it's a kind of, sorry, it's kind of a walkthrough, you know, moving towards a script and also towards a kind of technical communications document that lays out the format of how we function in the space from beginning to end.
Frode Hegland: So it is written as a script, but it is not a script.
Speaker5: Okay. Sorry.
Frode Hegland: A developer is asking a question; I'll answer later. Right. It's written as though it is a script, but it isn't. But let's pretend it is, in the sense that Andrew has all the time in the world and there is some kind of a voice tour in this thing. So let's not get hung up on the language, so to speak; let's look at what this says should happen. Some of these things are not what Andrew has built; there are a few different things, which just means we need to discuss them. Not a single thing in here is decided. Interrupt at any time; please don't use a raised hand, because I can't see it. Okay. So on our loading screen, as we have now, we'll have similar text to what we already have. We will also have radio buttons for right- and left-handed, so that in the future, when this thing refers to a hand, we don't say "prominent hand" or anything that gets confusing; we just say your right hand, your left hand. Also, as we have now, an option to upload a library. We will have that, but it'll also say which library is active and when it was updated, so you have a feeling for whether you are really looking at the library you think you have. We are also working on the introduction for the demo; it will always be there. This part, Andrew, specifically for you, seems reasonable to list, especially this, right? If we decide to do it. Hang on, I'll make it a bit bigger.
Andrew Thompson: Yeah, I can’t really read it, but what you described verbally seems reasonable for that part.
Frode Hegland: Okay, cool. Simple. So I'll just read this; you don't have to read the screen.
Frode Hegland: So, the initial environment. And when we're talking about "environment," what we mean is the visual surround, the non-active bits, to make sure we have a shared language for this. It has nothing there, only dark gray, as we have now. Users will have the option to change the environment background later.
Frode Hegland: And I just realized I needed to use that word again. So, also for the demo, only a single document appears in front of them. It isn't a library, so to speak, yet. So the voice would say something like: "Welcome! We start here in the library. For this demonstration, you will have access to the ACM Hypertext Proceedings." Notice, Mark, it doesn't say "what you see in front of you." "Just point to it and pinch your fingers to open it." And here we get to the notion of maybe helping the user. So if nothing happens for a while, if we have the voice, maybe it says something like: "Please point to the ACM Hypertext Proceedings with your index finger, making sure that the line extends from your hand, lining up with the proceedings. Then pinch your index finger to your thumb to open it."
Andrew Thompson: Just gotta jump in real quick, because I know you said interrupt if something's incorrect. We don't raycast out of your finger. That was an old design we had, like, two months ago, and people didn't like it. So it's from halfway between the index and thumb, and it's kind of its own thing. Yeah, it's a little bit hard to explain to the user where it's showing up. So maybe for this part we have the line always visible, and then the line fades out once they complete this first task or something.
Frode Hegland: Yeah, I know that, and I think you made a great decision there. I purposefully misled the user because, as you say, it's too hard to explain; it kind of feels like it's the finger, maybe slightly off. But anyway. Okay, thank you for interrupting, but it's not really something to change now anyway.
Speaker5: Gotcha.
Andrew Thompson: Okay. If you're trying to point at something, though, and it's not selecting, the user is going to get frustrated very fast, if that's what they expect to happen.
Frode Hegland: It should be point, and then do the index finger pointing, right?
Andrew Thompson: No, it never points from the index finger. You do have to do the three-finger curl to turn it on. I feel like this is best described to the user with an icon or some kind of image that shows up in front of them, showing them the hand gesture. That's easy to pick up on; it's what we've done for some other VR projects.
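For reference, the ray origin Andrew describes, halfway between index and thumb rather than out of the finger, could be sketched like this. The coordinate convention and the pinch threshold of roughly 1.5 cm are assumptions for illustration, not the project's actual values:

```python
def pinch_ray_origin(index_tip, thumb_tip):
    """Midpoint between the index and thumb tips: the ray origin
    described above, not the index finger itself.
    Points are (x, y, z) tuples in metres."""
    return tuple((i + t) / 2 for i, t in zip(index_tip, thumb_tip))


def is_pinching(index_tip, thumb_tip, threshold=0.015):
    """Register a pinch 'click' when the two tips come within
    ~1.5 cm of each other (threshold is an assumed, tunable value)."""
    dist = sum((i - t) ** 2 for i, t in zip(index_tip, thumb_tip)) ** 0.5
    return dist < threshold
```

Because the origin sits between two fingertips, it is "kind of its own thing" and genuinely hard to describe in words, which is why an animated hand icon explains it better than any voice prompt.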
Speaker5: Yeah. Okay.
Frode Hegland: That's very good. I'll write that down: show a hand doing this.
Speaker5: Yeah.
Frode Hegland: Let's really have a machine hand in front of you doing that motion, you know, and they can turn it off. Thank you. That makes a lot of sense.
Frode Hegland: Right, reading on. By the way, it looks like a long script; it isn't. "You'll notice the library fading and moving away from you in the background." This is when you have touched the document to open it. "You're now in what we call the reading room. The ACM Hypertext Proceedings is shown to you as a list of the titles of the papers it contains, in the order they were published. We expect that one of the first things you'd like to do is to determine which papers are relevant to you, and which you'd like to go through first, so that you're up to speed on this conference. This is what we will go through now." Again, please, on bigger issues like this: if you disagree with this aim, that's absolutely, totally fine. Just write a note and we'll go through it after, because now we're just doing the mechanics, right? "Now to how you can see what papers might interest you. There are what we call elements in the space. You can fully customize these; we have just provided you with a set of starting elements. A key element in this collection of papers is who wrote them, which is why we've set up a list of all the authors on your left. You can select any author or authors, and you will see that the papers that are not authored by them will fade a bit, letting you focus on your selection. At the top of the list of authors is a heading, Authors. This is both a heading and a virtual envelope: you can tap on the heading to fold all the authors into it, and tap again for them to reappear."
Frode Hegland: So that's an action we may or may not have. I'm not sure if it's useful, but that's the kind of thing being presented. In addition to pointing, pinching, and touching the sphere on your wrist, there are a few other useful gestures for you; it's actually really just two. Make a fist with your left hand. This gesture allows you to select more, so it would be the same as a Command key or a Shift key, right? First, make sure you've selected one or more elements and that one or more papers are therefore highlighted. You can now choose to select more, as you might do on a desktop computer by holding Shift or Command. You do this by making a fist with your left hand, as though you are grabbing and holding on to something. And then you can keep pinching.
Andrew Thompson: Could we elaborate a bit more on what that is? Because I have never heard of this functionality before, so if I'm implementing it, I need to know what it is.
Frode Hegland: This is the point of showing this. It is a new thing; it is just a suggestion, and that is exactly what we're discussing. There aren't that many things, so we can spend some time on it. We need some way, when you have selected stuff, to be able to select more without losing your selection, and we don't have a keyboard. One of the things Adam was talking about today is, you know, it's very easy to get misreadings, as we found out, but if you involve one hand with the other somehow, that's very deliberate. So the whole idea here is quite simply: I'm pointing, I pointed, and now I'm just indicating to the system with my left hand that I want to hold on to these, but I also want this one, that one, and so on. So how do you guys feel about that being a potential thing? When I let go of the left hand, obviously flipped for left-handers, these are still selected and I can move them, but I can't select more; then I'll deselect.
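The proposed left-fist gesture is essentially a modifier key for selection. A minimal sketch of that state machine, with hypothetical names (nothing here reflects Andrew's actual implementation):

```python
class SelectionState:
    """Sketch of the suggested gesture: a closed left fist acts like
    holding Shift or Command, so pinches add to the selection instead
    of replacing it. Releasing the fist keeps the current selection,
    but the next plain pinch replaces it again."""

    def __init__(self):
        self.selected = set()
        self.left_fist_closed = False

    def set_left_fist(self, closed):
        self.left_fist_closed = closed

    def pinch(self, item):
        if self.left_fist_closed:
            self.selected.add(item)   # additive, like Shift-click
        else:
            self.selected = {item}    # a normal pinch replaces
```

The appeal of the two-handed version is exactly what Frode says: a deliberate second-hand pose is much harder to trigger by accident than another one-handed gesture.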
Andrew Thompson: Is this for the map view? Or are you talking about selecting multiple documents?
Frode Hegland: The map view, so to speak, is this. Yes, we're kind of always in a map view. It starts with the list, but you can start doing things, you know, like having elements on the side; you can start moving things around.
Andrew Thompson: So you're saying the map view is part of the reading space, the workspace, as we've been calling it?
Frode Hegland: This is what we need to decide together. I am confused here, and I'm thinking that, first of all, if we ignore the map kind of thing for a moment and think of just a column of things: the elements in the room, such as, you know, "George Washington" or whatever, keywords or LLMs, are the things in the room that can change what you're looking at. Yeah, Dene.
Dene Grigar: Yeah. One thing we talked about last week, Frode, and I worked on this for about an hour yesterday: I'm trying to come up with common terminology. The idea is that the user puts on the headset and enters into the virtual space, right? And they land in, they teleport into, an environment. The default environment is that gray environment we've already built, the minimalist look. There could be some other choices down the line, perhaps a classic reading room, perhaps something nature-based, but they're in an environment, right? And so then they are able to access the ACM library, or whatever libraries are available in the future, but right now the ACM library, and they tap on it, and it opens up. Now, when it opens up, it becomes like a workspace for them, right? So their workspace is in front of them, kind of like how you organize your desk. It's organized here, right? And that's what we mean by workspace. What is niggling us is the term we're using called "map." That's the thing in which you move things around in your workspace and you're able to reorient. We don't have a word for that; we're using "map" right now, and that's been driving us crazy. And Mark's got his hand up, so I'm hoping Mark has a brilliant idea. Well, that's the scenario. So, Mark, go ahead.
Speaker5: So.
Mark Anderson: To me, and obviously, like everyone, this is based on my experience, so not everyone else's: a map basically implies an area of free-form use. Now, a map could be the whole of the environment. So the word that springs to mind if you're talking about the space you're in is just, you know, the environment you're in, because it's endless and unceasing. And one other quick thought listening to this: I think it's really useful to do this, as in, this is what we want people to understand. I'm not sure we'll be able to do it like this, because my experience of writing things like this is that the reader's question is, why am I doing this? So I think, once we've worked out what we want them to see, and this is really good because it's paring that down, we then need to recast it in the form of what you're going to do. In other words, they're doing it for a purpose. At the moment, we're telling them things they might do in an environment they don't yet understand, which means they lack the experience we have as the authors writing this.
Mark Anderson: So I suspect what would bridge that is, in a sense, pre-explaining that you're going to do something, and then explaining how, because that makes more sense: oh, right, if I didn't know how to do this, I wouldn't be able to do that action. But I don't think it's necessary to do this now; this is really important, so my point isn't a critique, but it's about making it useful to the user. And one other thing I did wonder: are we able to effectively do a sort of picture-in-picture? Because the other thing is, it's often very difficult to explain some of these movements, which are obvious after the fact. But perhaps if you literally just saw an animation of somebody doing it, and that could be off to one side as part of the help. So if you didn't get it from the explanation, you could have it shown to you, because most people can probably repeat a gesture once they've seen it, even if the explanation didn't land. Yes, Frode.
Speaker5: Yeah.
Frode Hegland: That's what I think Andrew suggested for the introduction, the pointing and so on. So yeah, if we can, we will definitely have a hand doing it. If we can't implement it by the time we're showing it, it's okay; the person putting the headset on them will literally stand there as a human being and demonstrate. So yes, it's crucial to not just talk it through, no question.
Mark Anderson: And if that's not possible, another thought is, in a sense, there's no reason we can't show an area in our environment, even if it's just a video of somebody's hand doing it, whatever it takes for the person with no prior experience to get over that hump of: oh, right, I need to make this shape or do this gesture. So I don't think it's necessary, and it's nice if we can do it in the métier of the environment we're working in, but if that proves difficult in the short term, one possible fallback is, in essence, to show some video, possibly in a window. I know it's not as elegant, but I just park that as a thought.
Frode Hegland: No, no, no, I don't have a problem with that. And I think we should ignore that for now, because it's important. Meaning that what we should probably shoot for, for the actual Hypertext conference, is literally a computer monitor on the side of this, with a video presentation, screen captures, all of this stuff. So when someone puts on the headset, they already know this stuff. That's why I'm saying we should ignore it: because we will elevate it to its own experience. Dene, what do you think? Does that make sense?
Dene Grigar: Yeah I agree.
Frode Hegland: Okay. So, someone's waiting for the headset, so let's go through this, okay. Let's go back to the notion of "map." Andrew, for the time being, I feel we are talking about there being no separate map: it's always movable things everywhere in space, which is why management of these things will be crucial. So currently, right now, we're breaking a lot of what you've done. But don't stress too much; we may very well circle back to it. Okay.
Andrew Thompson: Well, that's not particularly encouraging, saying it's broken now. So then I fix it, and then that's broken, and we have to put it back.
Frode Hegland: No, no, Andrew. What I mean is this is a really important session. We're going to agree to some things here.
Speaker5: Right. So let’s.
Andrew Thompson: I started doing some work already on the map. It's currently its own space, because that's how it was presented to me. I can change that, but not right now, because it's halfway through development, so I'll have to implement it and then change it.
Frode Hegland: What I mean, Andrew, is don't worry about it, because we're talking loosely here, those of us who don't have to code. By the end of this discussion today, we will have made some decisions, and you are 100% part of those decisions. So if you say "that sounds great, but not now," or "are you crazy?", that's really important to us. This is not a change being rammed down your throat. I'm just saying that I'm aware that what we're presenting here is quite different. But, you know, let's let it mellow a bit and see how it fits.
Andrew Thompson: Cool.
Frode Hegland: Yeah, just, I'm trying to take a step back, but it shouldn't, you know, come crashing down on you. So, okay. It seems like, for now, the notion of "I want to select more": you do a grab thing, and then you point to things and you can add more. That seems to be a reasonable thing, maybe, right?
Frode Hegland: That's all we want out of this; if someone comes up with a better gesture, fantastic. The second one is much more pertinent to the question of what space we are in, because we used to be very "you're in this sphere or that sphere." Now we're talking about a pushing-back thing. So I'll just read this: "Pushing your left palm forward. This gesture allows you to focus. If you've selected some items, documents, elements, whatever, and you now take your left palm and push it away, everything that isn't selected will snap to a level further away and then fade out a little bit." Meaning: I can work with this stuff now, while keeping what I have selected. So that gets into what you've been talking about a few times, Andrew, about having spatial kind of x, y, z axis borders, so that you can have something in the background, as it were. And that probably gets very much into the idea of what this space is, because if you open things and, you know, these are the things I want, you push all that other stuff back. With that motion, it's not that you go back to a kind of library space; you can go to another document, open that, do things, and push something else back. So you start having a 3D version of "I'm working on this, but not that." Mark.
Mark Anderson: Yeah, I'm struggling to follow. I mean, I get the bit about focus, but I've read it several times and I can't visualize what's going to happen. I think what you're referring to is that you're trying to separate two bits of content.
Frode Hegland: What we're trying to do is, okay, just to show you real quick. So let's say we have something like this, right? Through whatever reasons, I've selected this stuff. Now, I don't want the rest of it; I just want it to go away, right? For future views, everything I do will only have to do with this; if I write a new thing, the other things will be ignored. So the notion is simply that, having done that, I do this action. Sorry, this action. Yeah. Everything else gets glued onto the wall, like a writer's room or a murder wall, whatever language you like: it's in the background, it's out.
Mark Anderson: So which bit, the non-selected? I mean, the fact you're using a picture to show me screams out that this is best not described with words; I'd literally show someone a picture.
Mark Anderson: No, I agree, because it's so much easier to understand.
Speaker5: Yeah.
Frode Hegland: That's why we need a video. You have convinced me of that; we have to do it. You're absolutely right. And the reason I keep emphasizing that this one would be with your left hand is that your right hand, or your dominant hand or whatever it is, is what you're doing things with. So it's easier to understand: you just push away what you don't want, right? And then the nice thing is, you could have some sort of interaction, in the future, to point to the background with your right hand and pinch to bring it back.
Mark Anderson: When you say your right hand, you mean the chosen dominant hand, just so I fully understand. So basically, just for the understanding of this, it would be better to write dominant hand and non-dominant hand or something like that, because you used left and right. Especially if you’re left-handed, you then have to translate round, or work out whether there is a real meaning in it having to be left. The reason you’re using a particular hand is that it’s the one that doesn’t have the tool on it, or something, which is fine. One other thing I just picked up on: funnily enough, we may need to define focus a bit more, because I have a sense that it may have a slightly slippery meaning. There’s focus almost in terms of selection, or focus might be the part of the environment that you want close to you. In fact, the way it’s described here is doing something different: it’s actually defocusing a whole lot of stuff. Which is fine, but it says to me we probably need to iterate a bit on that term. We’re not.
Speaker5: Using that term.
Mark Anderson: Out of something different.
Frode Hegland: We’re not using the word focus in the description here.
Mark Anderson: Well, it says it allows you to focus. That’s what I mean.
Speaker5: Yes.
Frode Hegland: In the kind of spiky bit but in the. Yeah. All right. Okay.
Mark Anderson: Well, that’s what I mean. Sorry. And I know it’s difficult because, having written the words, everything feels like critique, and that’s not where this is coming from. I’m just reading it and asking: what did I think that meant, or rather, what did I intuit it meant, versus what did I actually read? That’s really what I’m picking up on.
Speaker5: Absolutely.
Frode Hegland: So the last of the three gestures is that
Speaker5: The palm of your hand.
Frode Hegland: Right or left, I don’t know which one makes sense for this. If you hold it straight towards you for a second, you will get a cheat sheet showing you what the gestures are, literally a menu: show me how to do this, that and the other. So that’s just a little guide thing. So we’re basically done. We’re going to now look at a little bit of the sphere.
Speaker5: So the whole. Okay, I’ll just read it.
Frode Hegland: You can move any element anywhere you like, but this can become a bit cumbersome. There are therefore controls for what is displayed, which you can access by tapping the sphere on your wrist. This will produce two primary sets of options that appear as menus floating in front of you: elements and layouts. And when I say floating in front of you, I’m now thinking of the most boring, unimaginative large text menus, one here, one here, just to start the discussion. Maybe we go back to that, but we should think much cleverer, right?
Speaker5: Yeah.
Frode Hegland: Elements lets you toggle which elements are visible, and layouts lets you specify how you want to lay out selected elements. You will have noticed that you also have options for settings and for help. Settings is where you can specify another library or set of libraries, choose the environment or background you’d like to work in, and what digital voice avatar you would prefer, if we have such a thing; these are not enabled for this walkthrough. Help is available to give you more information in written or spoken form. Anyway, the simplicity of this to begin with is quite simply: you have chosen a document, excuse me, a paper or papers, though generally you work with one at once, and within the elements you’re choosing the stuff that is around it, so you can choose to show all the keywords you have that are a person.
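The elements-menu toggle described here (show or hide every element of a given type, such as all persons) could be sketched as follows; the element shape and type names are assumptions for illustration only:

```javascript
// Sketch of the Elements menu: toggle visibility for a whole
// category of elements, e.g. every keyword typed as a person.
function setVisibilityByType(elements, type, visible) {
  return elements.map((el) =>
    el.type === type ? { ...el, visible } : el
  );
}

// Convenience: what the renderer would actually draw.
function visibleElements(elements) {
  return elements.filter((el) => el.visible);
}
```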
Mark Anderson: Yep. Did we explain why we’re in a sphere? Because the one thing I know is, when I’m standing in actual reality.
Speaker5: I’m in as a.
Frode Hegland: Thing to touch.
Mark Anderson: Oh, I see, you mean the control on the wrist. Okay. Sorry. I thought you meant. I thought you meant the display space around you. Sorry.
Speaker5: Yeah. Yeah. Sorry.
Frode Hegland: So the thing is, this is a little bit contentious here, and we have
Speaker5: It’s a some decisions to make.
Frode Hegland: So. An element can be as simple as a keyword, like George Washington, as I keep using. And you can choose to hide and show all elements that are people. I’m beginning to wonder now if these elements would have to come from the document, or if they’re from the user.
Speaker5: They probably should come.
Frode Hegland: From the document and the user can have their own.
Speaker5: Well, I.
Mark Anderson: Take you back to the suggestion that these are all just points in space, in the environment, and their appearance is clothed with whatever metadata we give them. So in the initial instance you would expect it to come from the document, because you haven’t done anything to it. If you want to apply further information you could, but I think that’s too complicated to do in a demo; we’ve already given someone who’s never seen this before an awful lot to understand. I think the bigger fundamental point, and one of the things that is so fundamental in the way this shifts us away from paginated print, is: here is the information, and it’s malleable into virtually any form you want it to be. Because what that object is, one of Fabien’s ice cubes or something, it doesn’t matter if it’s a green cube or a blue cube. They may have meaning and you can assign them meaning. But I think the wrong evolutionary path to go down is to start thinking that you’ve somehow got points in space that are different in any way other than the metadata assigned to them, because then you’re just forcing a set of classes that aren’t needed, and actually making it more difficult to do the wonderful transformation stuff that we can do in this space.
Frode Hegland: Okay, I do think we need to decide on some of this, because we do need things in order to have things. I don’t think we can be too general about it. But there will definitely be keywords that are in the document, where the document has said: I have these keywords.
Mark Anderson: That’s fine. So you have a point that knows it’s a keyword. It doesn’t have to be a keyword object. It’s just a point in the space that knows it’s a keyword. Okay.
Frode Hegland: There is a thing, though. George Washington is a person. So that means that you can choose to show and hide all persons.
Speaker5: Yes, but.
Mark Anderson: But all you’re doing is making a selection across the information. So what I sense is that there’s a sort of tension here between whether we design back from what we imagine we will visualize, or design out from the information we have. You can get to the same place. The one thing we want to avoid is doing both. I think working back from the visualization, from experience, always ends up difficult.
Frode Hegland: Okay, this is an important point of disagreement, because I could imagine having a set of elements that are people who are important to me in the field of hypertext history, for instance. And when I load a paper or a proceedings from hypertext, I want to see if it refers to any of them, because I find that level of history interesting, rather than some other set of keywords. So that’s why it’s important for me to be able to assign that. And a really important notion, sorry Peter, I just have to continue this whole thing and we’ll get back to it, is that in addition to showing and hiding elements, Fabien, we should be able to include a smart element, which is my really dumb name for the kinds of things you’re making; please give me a better name. That’s just about what should be on screen or not. So Mark and I will have a longer discussion on this with all of you, and it’s useful, but please be aware that the other one is layout. Some layouts are simple: align everything I’ve selected to the left or center, spaced out horizontally or vertically; obvious, simple and useful. But then there’s the notion of a workspace. A workspace is all the stuff that relates to the information in the room, so it does not include the background. You can be on the beach in Hawaii or you can be in a gray room; a workspace doesn’t care about that. It is about what elements are in the room and where they are. So you can, for instance, open a document into a workspace where you have lots of hypertext people over here, and you really set it up; then you snap your fingers and load a different workspace, and suddenly it’s about a timeline or other stuff, and the main text stays the same, because that is the text. That’s the current thinking on this side of what a workspace is. Okay. On that point, over to you, Peter.
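The workspace idea just described (positions of elements in the room are saved and swappable, while the document and the background are untouched) could be sketched like this; all field and function names are assumptions, not project code:

```javascript
// Sketch of a workspace: it records which elements are in the room
// and where they are, but deliberately nothing about the background
// environment (beach, gray room, etc.).
function saveWorkspace(name, elements) {
  return {
    name,
    layout: elements.map(({ id, x, y, z }) => ({ id, x, y, z })),
  };
}

// Loading a workspace repositions matching elements; anything not in
// the workspace (e.g. the main text) stays exactly where it is.
function loadWorkspace(workspace, elements) {
  const pose = new Map(workspace.layout.map((p) => [p.id, p]));
  return elements.map((el) =>
    pose.has(el.id) ? { ...el, ...pose.get(el.id) } : el
  );
}
```

Snapping your fingers to switch from a "hypertext people" arrangement to a "timeline" arrangement is then just `loadWorkspace(timelineWs, room)`.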
Peter Wasilko: Okay. Oh, my train of thought derailed during your last segment there. Maybe dynamic would be a good name or programmatic instead of smart for the elements there. Also in my own work, I’m trying to tease out what elements I can extract from the HTML markup version of the files that ACM provided. So I’m working on an island grammar to try to tease out bits that might be potential elements. I’ll see what kind of data I can successfully scrape from that in a semi-structured format, and that might suggest other possibilities that we can’t see yet.
Frode Hegland: Okay. Thank you. And Mark.
Mark Anderson: Yeah, I’m just trying to understand this thing about the special people. So in your data set you have some things where a boolean, special to me or special to Frode, is true, which means that when you load them, you want them to be present. But I don’t understand how they’re more special than anything else. They’re all just elements that are available. It’s just a bit of metadata; I don’t understand why it has to be a separate class.
Frode Hegland: It’s separate.
Speaker5: Because.
Frode Hegland: A document will, if it has visual meta or equivalent, have certain named keyword entities in it which you may not be aware of.
Speaker5: Right.
Frode Hegland: So if you see the tags, keywords, names, whatever it is, based on the document, they may go off to a specific space, let’s say a dedicated space for whatever the document has. And then separately, you have your own interests that are always there in that workspace.
Mark Anderson: So okay, you have two sets of keywords from different sources, but they’re all just keywords. Some of them are derived from an object; they are basically properties, metadata of the paper object you put in the space. And you basically have an essentially predefined separate set of other entities, keywords that are of interest to you. That’s absolutely fine; I don’t see any disconnect in that. It’s just getting away from the idea that these are different classes of things. You’re just talking about two sets of keywords: some of them happen to be ones you’ve predefined, and the others are drawn in either because they’re stored as part of the document, or maybe because they’re dynamically extracted by something like an AI. So I really don’t think there’s a problem there at all.
Frode Hegland: It’s a management issue of how to have it in the space. So when we talk about exploding documents or pop-up books or 3D documents, it kind of gets into that, because you should be able to have all these different, let’s call them map views, where you put these elements around. And hopefully at some point some of these elements will be able to do more active things in the room, like literally being magnets drawing other keywords to them. All those fancy things will probably be useful. So you want to be able to spend time laying out a room: okay, I’m now going to study history, so any of the keywords I have that are about history, I want them here, there and there.
Mark Anderson: Okay. What I hear you what I hear you describing actually, is you’re actually describing the informational aspect of a workspace.
Speaker5: Yes.
Mark Anderson: So again, that’s fine, because you’re basically saying that for a workspace of this type, I want certain things. And effectively, the other thing that’s implicit in what you’re describing, because of the library word, is that you will have a library of information objects, the most obvious being types of keywords, groups of keywords, that don’t derive from your library of references. They’re almost like annotation stuff, something you will have done personally. So this is another set of data that has emerged that isn’t the library but sits with it, and then you just draw from that. So you would know that if I’m going into my personal workspace, or my workspace three, which happens to be about history, I will want all the things that I’ve tagged as relevant to history to be elements in that space. And then you can decide where you want them. But again, they’re all there; there’s nothing special about them. What gives them the specialty is the fact that you’ve drawn them from a particular place, which is, in a way, your personal tag library. You’ve assigned them as being important to the task you’re doing at the time. And you may further have spatial layout requirements for them, of your choice, and that’s fine. So again, nothing in that suggests you need a special object for it.
Speaker5: Okay.
Frode Hegland: It’s a management issue, right, just in terms of where you’re going to put it in space. I think the system needs to know: here is Mark’s set of things for history; and by the way, Mark, here is a choice you can make to see these tags that happen to come with the documents, which you may or may not have. We just have to have some means through which we put them in a nice place in space, that’s all.
Mark Anderson: Well, your layout will tell you that, because you say you want to have this there. So in setting up that layout, there will be a known vocabulary of items that may be shown or not according to your desire, and positioned according to your desire. And you may, for instance, also gloss that data with some aspect of relative importance to the task you’re about to do. But that just comes in your setup; that’s what you define in the workspace. So there isn’t a management problem as far as I can see. The management element is the workspace.
Frode Hegland: Yes, exactly. And I believe that is what we’re working on. So when you open the document, you have got all these things in the space that are your personal elements; where does the system put the elements from the document? I’m not saying it’s an end-of-the-world problem. All I’m saying is that we need to think through the interactions whereby you can control where you want these things. That’s it. Because they are different.
Speaker5: In the sense of.
Frode Hegland: Where they come from.
Speaker5: So in other.
Mark Anderson: Words, they’re different in the sense that they won’t be the same for everybody. So it’s something you are going to set up. And as we come to an understanding of the space, we may understand patterns we can use to preset them.
Speaker5: It’s also in.
Mark Anderson: A sense, what you’re describing is the individual space, but that then doesn’t scale, because each space is different. So what’s important is to draw out the common thing: there are seven people here who, for the same workspace, may all produce something different.
Speaker5: Therefore it’s important.
Frode Hegland: It’s the opposite of that, right? Dene, Andrew, all of us can have different personal workspaces. And thank you, Andrew, I took that language from you, of how you want your elements. But if we all load the same ACM document that has been analyzed for keywords and names, that’s going to be common for all of us. And if we choose to view those keywords in that space, they have to go somewhere, and they may overlap something else, and so on. So I’m not saying it’s a big issue, but we should have an interaction to choose where the things we didn’t write ourselves should be. That’s all. Do you have a solution, Peter?
Peter Wasilko: I think we really need to reify our queries so they can actually be objects within the space, so that a query I develop to find something can be an entity. Maybe a little glowing green ball could represent queries that someone provided. Then I can share that with you; you can open it up and see what the sub-elements of the query were. And chaining across documents, I think, could be incredibly powerful. I’d like to be able to ask for all of the papers that were written by people who coauthored papers with individuals who studied at the MIT Media Lab, or give me all of the papers that are on topics that were identified as keywords in papers by.
Frode Hegland: Can you please write that down? But that’s a little bit out of scope for this kind of interaction. It’s important to do, but that kind of logic would require the means through which it can be entered, and then we’re kind of back to the command line. So yes, it’s important, but let’s just talk about simple blocks in space for a bit, okay.
Speaker5: Or do you think.
Frode Hegland: It can be done in this way?
Peter Wasilko: Well, I think just the idea of being able to chain across papers, so that instead of just asking about the immediate paper I’m looking at, I can run the query against the papers that the paper is citing. Even if we only went out one step, just to show that it’s possible in the environment to chain into a search on the references, as opposed to only looking at the individual paper in front of you. That’s really the difference between reality and what we can do in a hyper-reality library. In reality, I’m looking at the paper and I would have to go and find all of those papers; in our environment, we should be able to run a query against the things that the thing is citing.
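Peter’s one-step-out chained query could be sketched as below; the paper shape, the library lookup, and the function name are all assumptions made for illustration:

```javascript
// Sketch of a query chained one step into the references: run a
// predicate against the paper in front of you AND against every
// paper it cites, skipping citations the library cannot resolve.
function chainedQuery(paper, library, predicate) {
  const cited = (paper.cites || [])
    .map((id) => library.get(id))
    .filter(Boolean); // drop unresolvable citations
  return [paper, ...cited].filter(predicate);
}
```

Going out a second step would just mean recursing over each cited paper’s own `cites` list, which is where reified, shareable query objects would start to pay off.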
Speaker5: Yeah, okay.
Frode Hegland: Let’s look at that in the next section. Okay. Right. So I think we’ve gotten closer to this, and I think we all agree at least that we need some way to decide what things should be in the space, and then we should be able to lay it out in different ways, manually and computationally, and to save and load workspaces. That seems reasonable, right? Because I’m hoping, as we agree on some of these interactions being necessary, we can all go off and think about how they should best be done.
Speaker5: Right.
Frode Hegland: So forget the voice command introduction. If we have it, that’s a bit much.
Speaker5: But
Frode Hegland: Okay. So document interactions in this list. In.
Speaker5: Tiepolo.
Frode Hegland: So this is very close to what we have now, Andrew, so for this bit you can breathe. When you point to a specific paper, you will see the authors’ names and abstract appear to the right. When you point to a specific paper, you will also see several options appear to the left of the title; they won’t move, because it’s left aligned. These are the options we’re currently thinking of having here; please remind me if I’m missing any. One of them is open, so we don’t have to worry about double taps and stuff. Maybe we do. Information, I don’t know what we should actually call that, but that is to see the metadata about this document. And then there is one where you can add and edit the metadata. And then we have the important ones. What was it?
Speaker5: Yeah. And. Oh, really? Well, okay.
Frode Hegland: Sorry, everyone. Adam has become the doorman, security, because he’s got a very serious-looking head now. So, yeah. Two of the really important ones are one button, so to speak, that says this is a paper I’m interested in, and another that says I’m not interested, whatever the label is. Are those reasonable options for when you point to a paper in the abstract proceedings? Yes. Dene.
Dene Grigar: Yeah. Now that you’re saying that maybe we say for information and metadata, how about
Speaker5: About? Maybe.
Dene Grigar: So. Yeah. And then My mind just totally went blank.
Mark Anderson: Is it the documents?
Dene Grigar: Add metadata? Yeah. Add metadata.
Mark Anderson: Because the about strikes me as the document decomposition button, in a sense, because that takes you into whatever we know or are able to compute about it. In other words, you might just want the reference list or the abstract, or you might just want paragraph two of section three or whatever. And it’s going to vary from document to document, because some are more structured than others. But it would seem that behind that second thing are all these bits that at the moment you can only get by reading the document by eye, in paper or facsimile-paper form. One of the things we’re doing here is blowing apart the legacy of pagination and the document boundary, because, as Peter’s observation about queries shows, what you’re often trying to do in reading is to situate something in terms of the wider knowledge. A trained academic reader has to do this in their head, because there is no other way; things are broadly imprisoned within our legacy tech, which is the page, even though we’re no longer using paper. And so a really interesting part here, a really key part, I think, is being able to tease that apart. If necessary, or if that’s where your thought process goes, you might literally go into the document and straight out through a reference to something else, because that is the thing your mind needs to resolve before you can go back and read the next section of the paper.
Dene Grigar: So do we say read metadata and add metadata instead of about? Read metadata, or current metadata?
Frode Hegland: Yeah, just to add something to what Mark said. So open now would mean open into its own thing, away from the other stuff. But outline, I think, is what you were talking about, Mark, where the idea would be that it’s still in the list, but it expands down a bit. So it’s basically what Andrew has built, but it shows in a box and you can choose to make it bigger and smaller. It’s an outline; you can click to read the beginning of a section, so you get much more of an accordion feel.
Mark Anderson: I mean, this is exploratory, and I’m deliberately trying not to get too far ahead, because I can see all sorts of things we could do, but they’ve all got to be coded, and until we try them, we won’t know what really works.
Speaker5: No, no, exactly.
Frode Hegland: So I’m thinking now about the about issue. We may not need it, because, as we have now, you see the authors’ names and abstract appear to the right. So I think we can just delete that.
Speaker5: Right. Okay. Okay.
Frode Hegland: So then we have something like add metadata or manage metadata.
Dene Grigar: Manage is good.
Frode Hegland: Manage metadata. And then we have this super important pin or hide. That’s what I use in Reader now. I don’t think it’s the best language; favorite and like are a bit social-media-ish. May be interesting.
Mark Anderson: If you want to keep your menu short, to be perfectly honest, I suspect you could just make the third one metadata, because it should be self-evident what you’re going to do with it. I mean, the first time you may guess, but then you know: okay, this is going to take me to something where I can read more detail.
Speaker5: I’m agreeing with you.
Frode Hegland: Three times in a row. Mark. Thank you.
Speaker5: That’s perfect.
Frode Hegland: Dina, do you agree? Just metadata is good.
Speaker5: Now that we don’t have a boat.
Dene Grigar: Oh, but that’s okay.
Speaker5: Oh, okay.
Frode Hegland: So you would prefer manage?
Dene Grigar: I don’t want to argue. I don’t want to argue. Just leave it there.
Frode Hegland: These are the important things to argue. This is really important. So please, what do you prefer? Manage metadata.
Dene Grigar: I do, because if I’m a user... I mean, this is COIK, right? Clear Only If Known. We know when we get there that we can edit it. But if I land there and I don’t know anything about this environment, the first time especially, or the first few times, I’m seeing metadata and it’s like, oh, here’s all the metadata, and it appears, and I don’t automatically know that I can do something with it. So manage tells me that I have control over that object or that item.
Speaker5: Yeah. I think we’re.
Dene Grigar: Can I also mention something else? We’re using verbs: open, outline, manage, pin, hide. So from a technical-communication perspective, what I’d like to do with this document, once we’ve agreed everything, is actually go back through and clean it up like a technically written document, with all the consistency of language, so that we have parallel verbs.
Speaker5: So how about this?
Frode Hegland: Okay, everybody, look at the screen. Are you ready? One. Two. Three.
Speaker5: Manage. Okay.
Frode Hegland: We don’t need the word metadata, honestly, do we?
Mark Anderson: No, because if you’re having very generic things anyway, you might be managing a document, or you might be managing some other element in your space. So if it’s like that, that would seem fine. The other thing to bear in mind is that we have talked about the possibility of a tutorial mode, in the sense that you might have more fulsome labels on things, which you could do. So I don’t think we have to be too dogmatic in our choices at this point. Agreed.
Speaker5: Okay.
Frode Hegland: So you get to the point where the system says please choose. Open: you now have the paper in front of you, and you have several options for how to interact with it. So Andrew, this would be exactly the kind of open you have now, I think. Everything before was the library reading-room thing. Logically, please stop me if I’m talking nonsense here. So once you have the document in front of you, all I’ve done, for the sake of initial simplicity, is copy what I have in Reader, which is of course lacking, this being an XR space, but it’s somewhere to start from. So if you select text, and it’s worth mentioning, if we have the voice thing here, let’s say you want to look up a word on the screen, you don’t have to select it; you just speak, look up, and you say the word. Because selecting in XR is hard, that’s an example where voice might be very useful. Then we can have options for an AI doing semantic annotations, which is what we’re now experimenting with in Reader; doing find, either on the document or online; copy; and lifting a section so you can have it as a snippet in the corner of the room. And then there’s one that doesn’t even have a proper name yet, but the idea is that if our WebXR understands the table in front of you, you can choose to place the virtual document on the physical table, so you can lean on it to read it, for instance. All of these are experimental things. And if you have ideas for how to do this,
Speaker5: Tell me, you know.
Frode Hegland: We’ll put it in. Or if you have things we should add, please also say. So these are basic things.
Speaker5: Okay.
Frode Hegland: And then we skip to whole-document interactions. What we’re talking about is some notion of a toolbar, maybe at the bottom of the document, where you can toggle between asking AI to do things to do with the whole document; copying the document as a citation; and view options, where you see an outline, or an outline with citations, or an outline with names, all these things. And then you can of course add notes, which is crucial. And you have this thing we call lift all, which takes all the text into a reading mode, meaning it’s like HTML, free-flowing text, if it wasn’t already. And of course settings to manage all that.
Mark Anderson: For copy to cite, copy citation seems more descriptive of what you’re doing. Copy to implies you’re going to have to do something else before you get your citation.
Dene Grigar: So copy citation.
Speaker5: Okay.
Frode Hegland: I’m not sure, because.
Speaker5: In. So this is.
Frode Hegland: What I have in Reader now, because the idea is to copy in order to cite the document; that’s why I had the whole sentence. Copy citation: there are citations in the document, and that’s what I’m concerned with. It could read as copy that citation.
Mark Anderson: Well, if you’re worried about scope, just say cite document, because that’s the document.
Frode Hegland: But you’re not citing it because you haven’t pasted the citation yet.
Mark Anderson: No, because the thing you just showed us was you’re looking at a whole document, and you want to cite the document that you’re looking at.
Speaker5: But that doesn’t.
Frode Hegland: Provide information to the user that what you’ve done is put that information on the clipboard. The user may say: okay, I cited the document, and nothing happened. That’s my concern with that.
Mark Anderson: I’m not sure I would see that confusion, I don’t think.
Frode Hegland: The thing is, if you’re in an all-encompassing XR environment, it’s a bit different, right? Because you can’t easily come out and tap over to Author, for instance. I mean, we’re probably not going to be able to implement this for September anyway, but okay. So copy to cite document.
Mark Anderson: Well, I would just say cite document.
Speaker9: I.
Mark Anderson: You’re asking for a citation for the document you want to cite, the document you’re looking at. Yes. But I would expect that if that happened, it would generate the citation information for you. Think about it: what else would it do? Nothing. So the obvious thing is that it’s going to have that citation, which you can then do something with; and it’s almost universal behavior that such stuff gets put on the clipboard.
Speaker5: Well.
Mark Anderson: Even if you’re going to do something else with it. So I’d say copy to the clipboard.
Dene Grigar: Can we say copy to clipboard.
Frode Hegland: Copy citation to clipboard.
Dene Grigar: Copy to clipboard.
Frode Hegland: No, but that’s just copy. What are you copying?
Dene Grigar: Whatever you highlight.
Frode Hegland: This is when nothing is highlighted; this is for the whole document. Because if you select text, then you have copy as text or copy quote. And when you paste a quote, depending on where you paste it, it has different behaviors: if you paste it in a neutral piece of software that we don’t control, it appears with quotes around it, as well as a citation, as best we can. This is quite Visual-Meta specific, and I don’t think we can do these things now. So I’m just going to put question marks after this one for now, for the language.
Speaker5: Peter, you’re still muted.
Frode Hegland: Peter?
Peter Wasilko: Oh, I’m so sorry. I was still muted. Do we have some sort of affordance within the environment that would let us create a named list? So a box: type a name to be associated with the box, then drag 5 or 6 papers or citations from inside the papers into that box, and then deal with the box as a whole.
Frode Hegland: I don’t think we do, but we have, here, Manage Metadata. I think this is where we might write something or click something to add the information that would allow you to have such a virtual collection. But I do think it’s a very important topic, Peter, that we need the means through which the user not only spatially arranges these documents, but can also file them away, so to speak.
Mark Anderson: Well, this fits within the metaphor of bindings that we’ve discussed in the past, doesn’t it? Because another way of looking at your list is as an ad hoc binding of n elements to which you, the user, attribute some significance. So, you know, I’m just abstracting the pattern out. But yes. And the binding contains a list, because it has to know what the contents of the binding are.
Peter Wasilko: Maybe Andrew could speak to that. How hard would it be to give us a box that we could throw papers in and attach a name to it, and then move that box around as an entity? And ideally stick the box on the clipboard or paste in a box that someone else gave us over the clipboard.
Andrew Thompson: So pasting with the clipboard is completely out of scope for what I currently have. Maybe later down the line; I feel like those are far-stretch goals. Putting stuff in a box and moving it, that sounds pretty reasonable.
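The “reasonable” part Andrew agrees to, a named box of papers that moves as one entity, which is also Mark’s ad hoc binding, could be modeled minimally like this. The class and field names are illustrative only, not from Andrew’s actual codebase.

```javascript
// A named, ad hoc binding of documents that can be filled and moved as a
// single entity. Items are assumed to render relative to the box's position,
// so moving the box implicitly moves its contents.
class DocumentBox {
  constructor(name) {
    this.name = name;
    this.items = [];                    // documents or citations in the box
    this.position = { x: 0, y: 0, z: 0 };
  }
  add(doc) {
    this.items.push(doc);
  }
  moveTo(x, y, z) {
    this.position = { x, y, z };
  }
}
```

In a three.js or A-Frame scene the same effect would come for free from parenting the items under one group node, so only the group’s transform needs updating.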
Speaker5: Okay.
Frode Hegland: By the way, because of that comment, I’ve added here, for your delectation, when you are looking at the document itself, between Outline and Manage: Annotate something, you know. So you annotate the whole document through some means. Obviously it shouldn’t just be a boring text box, but some means by which we can say whether it’s interesting or not. Yeah, I see a few nodding heads, so that’s cool.
Mark Anderson: Just quickly, for Andrew, so I understand, because it’s really useful to know about clipboard things: is this essentially because there isn’t a clipboard within the environment? Or is it the fact that if you need to interact with, effectively, the host computer, server, whatever, of the environment, you’ve got to deal with all sorts of sandbox and security issues, because you’re moving unknown data around? Is that sort of it?
Andrew Thompson: I mean, it’s kind of a mixed bag. If we want to just copy stuff inside the space, and nothing leaves the space, then yeah, I can do that. Obviously we know that clipboard copying through software works as well, but it starts to get a lot more complicated, and what I’ve currently built is not set up for that. If people decide that that is actually high priority, that that’s what we care about now, I can switch focus and start working on it, and I could probably get it working. I just figure, listening to this, it sounds like some of the other core functionality is more important, so I was prioritizing stuff.
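Andrew’s distinction between the two clipboards can be sketched as follows. Copying inside the WebXR space is just in-memory state; reaching the system clipboard goes through the browser’s async Clipboard API (`navigator.clipboard.writeText`), which requires a secure context and usually a user gesture, part of why it is the harder path. Names here are illustrative.

```javascript
// In-space clipboard: plain page state, never leaves the app, no permissions.
const inSpaceClipboard = { content: null };

function copyInSpace(data) {
  inSpaceClipboard.content = data;
}

// System clipboard: browser-only path via the async Clipboard API.
// Rejects outside a secure context or without user activation.
async function copyToSystem(text) {
  await navigator.clipboard.writeText(text);
}
```

So “copy stuff inside the space” needs no special handling, while system-clipboard round-trips pull in permission prompts and sandbox rules, which matches Andrew’s “mixed bag” description.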
Mark Anderson: Yeah, no, I think you’re right. And since lots of these things are nice to have, and we can’t engineer them all at once, I think it’s easy to explain to people anyway. If you say, well, it’s not that it can’t be done, but there’s a body of work that has to be done to make it just work, and sort of park it at that. Because I think a reasonable person will understand that there are some things that are imagined and possible; they’re just not possible at the moment.
Speaker5: Peter.
Frode Hegland: Briefly, we’re down to the last ten minutes.
Peter Wasilko: Maybe we could just put in a couple of grayed out menu items as placeholders for that kind of functionality so that someone looking at the demo will see that we have plans to add that sort of thing down the road, even though it isn’t there yet.
Frode Hegland: How would you define that sort of thing as a menu item?
Peter Wasilko: An import export submenu.
Speaker5: Okay. Okay.
Frode Hegland: Let’s look at that outside of the core, then. So while looking at the whole document, you have Toolbar, Map, and Outline at the bottom of the document, at least in this basic thing. The toolbar itself gives you the things we just went through, and then I’m going to skip ahead and go back. We have an outline, and in the outline, and this is what I’m experimenting with myself, it’ll either just show the headings, or headings and all annotations, or headings with only specific annotations, or headings and all names or colored keywords. You should be able to instantly click around on those, and I’ll talk to you if we’re doing this more specifically, and I’ll probably show in Reader how it works now. The map thing, or whatever it will be called (we really do need a better name): this visualization of it fits in the same size as the document, or maybe slightly larger, but it’s not by default room scale. And it behaves very much like the map we have here in Author: you move things around, and you have inside it a toolbar to show and hide elements and layouts, just like we had at the library level. But now it’s one document, one small thing. The reason it’s framed is so that you should be able to have many maps around the room for different documents, to compare them. We should probably also have a mechanism whereby they can overlap, should you want to, or be combined, should you want to. But I think that’s a little advanced for now, not just in terms of Andrew’s massive workload, but also in terms of how we interact with these things. So if you guys are okay with it, we’ll let Andrew focus on the map as he’s doing it now, play with it, and then see how these assumptions fit. Right? Because I’m very happy thinking loosely, but at this point, for this particular thing, I think: pause, wait for Andrew’s reality, see what we do. Fair enough?
Andrew Thompson: If I could give a quick elaboration on the map bit. From what I understand, the map is going to display all of the tags, all the names, and all the titles, which is, you know, really cool visually, but also very cluttered if we want that always visible. I know you can hide them individually, but, for the default display, would you want all of the tags and names and stuff from each document to load in with that document? So it kind of adds a bunch of floating elements around the document every time you load one in. I’m trying to picture how you visualize this, because, as I mentioned before, I was pretty sure it was going to get laggy, and I have since loaded in all the text, all the tags and stuff, as a test. And man, it kills the frame rate. It’s rough to work in there. So having that always visible is going to make the whole experience kind of not usable. You can do stuff in it, but it’s not nice.
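One common way to reconcile “everything available” with Andrew’s frame-rate concern is a per-frame label budget: keep all tags in the data model, but only render the N labels nearest the viewer. This is a generic technique sketched here, not what the project actually does; all names are hypothetical.

```javascript
// Return at most `budget` labels, nearest to the viewer first.
// Uses squared distance to avoid a needless Math.sqrt per label.
function visibleLabels(labels, viewer, budget) {
  const distSq = (l) =>
    (l.x - viewer.x) ** 2 + (l.y - viewer.y) ** 2 + (l.z - viewer.z) ** 2;
  return [...labels]
    .sort((a, b) => distSq(a) - distSq(b))
    .slice(0, budget);
}
```

Called each frame (or on viewer movement), this caps the number of floating text elements drawn, so loading every document’s tags no longer has to mean rendering them all at once.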
Mark Anderson: Also, my experience of these sorts of maps is that less is always more, and for the user, not even just from the technical implementation: every extra piece of stuff you stick in is another thing you have to process. So having stuff there that you don’t need, which doesn’t need to be there, because you can know about something you don’t see... This is why I certainly would generally want a map to open literally empty. You’d open it, you take an object or a binding of interest, put it into the map space, and then you would begin to work on the map from that. Now, as we’ve already discussed, you might personally have a set of metadata, which might be keywords, or keywords and people, or something, that you might choose to add to it. And, you know, as the design progresses, I would see that there could be a case where, if you opened a map that was effectively a workspace, you could inform the workspace that it should always open with certain things shown. But as the general underlying base object, I would imagine that basically what you have in the map is only the information objects you’ve sent to it, you see. In other words, you open something into a map, and it might just be a document, it might be a number of documents, it might be some other object that isn’t a document that you’ve chosen to open in. Does that make sense?
Speaker5: Yeah. So.
Mark Anderson: Okay, so just unravel that for me, the way in which it’s unclear to you, because I’ll try and express it better.
Andrew Thompson: It’s not that it’s unclear. It’s that it’s intentionally vague, because it can do a lot of things. Yeah. So it’s hard for me to be like, oh yeah, I can develop that right away.
Mark Anderson: Yeah. No, no, I understand. But I think the first thing to develop is just being able to take something that isn’t in a map, so it’s in one of our initial selection things, and put that into a map space, and then circle back around to do all the other geegaws around the side. Because the fundamental thing that you need at the outset is to put the objects in the map space.
Andrew Thompson: When you say putting the objects in the map space, are you talking about every element of a single document?
Mark Anderson: I believe not, because this is one of the things that, in a sense, we’re going to have to think and work on: how we represent, in essence, the internal link structure, how the parts of the document are linked together. Because you’re absolutely right: you’ve got a document that obviously contains parts. Now, you might want to decompose that document, but you might, for example, just say, well, which of my interesting keywords does it relate to? And if that’s the case, I just want the document object there, and I want other objects around it. So the map is strongest when it’s malleable. And I totally take your point about overloading it. But if we think in terms of pulling elements into it, maybe that’s the abstraction to start from. So you’ve got the display space, and you’re going to put objects into it.
Speaker5: Right. Okay.
Frode Hegland: So, sorry, you know we’re really running out of time. This is what we’ve been talking about for the last few weeks: what are the things on the map? That’s the question we’ve been asking. We’ve made it more complex today, but we understand it better. There is a map in the library view, where some of the things are documents and the stuff around them. That’s one map. The other map is when you’re looking at one document, and these things can be keywords that the document has said are important in the metadata, or they can be keywords that you yourself have said, I care about this, I want to see if they’re in the document. So this is where we need to talk about the management of these things. Most documents will not have any keywords unless our system has assigned or analyzed them, so to speak. Right? So I think what we should do is let Andrew get on with it, with the suggestion from us that we would really like to be able to do both approaches easily. One is: open them up and there’s literally nothing there; you have to use an interaction to add elements. The other one is: toggle everything on and then remove things. Because in this case, Mark, conceptually I agree with you, but consider that you are opening a map of unknown stuff.
Frode Hegland: It might be useful to see what the availabilities are. So, in other words, Andrew, my advice and request is just to do what the heck you want at this point, because now we need to learn from the real interaction rather than try to say what it should be.
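The two starting modes Frode asks for, a map that opens empty where elements are added explicitly, and one that opens with everything toggled on so unwanted elements can be removed, are cheap to support side by side at the data-model level. This is a sketch with a hypothetical API, not the project’s actual code.

```javascript
// A document map that can start empty (Mark's preference) or with every
// element visible (the "toggle everything on, then remove" approach).
class DocumentMap {
  constructor(allElements, { startFull = false } = {}) {
    this.all = allElements;                       // everything that *could* show
    this.visible = new Set(startFull ? allElements : []);
  }
  add(el) {
    this.visible.add(el);
  }
  remove(el) {
    this.visible.delete(el);
  }
  toggleAll(on) {
    this.visible = new Set(on ? this.all : []);
  }
}
```

Keeping `all` and `visible` separate means the renderer only ever draws `visible`, so either workflow can be offered without changing how the map is displayed.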
Mark Anderson: And, Andrew, I’m tied up until the end of next week, when the deadline’s passed, so I’m a bit slow. But after that I’d be very happy to spend some time, directly, at a time convenient to you, and have some conversation, and use Tinderbox maps as a good way to explore this. It would be good, because I think it will give you a fresh perspective.
Frode Hegland: If you give us a session next Wednesday, Mark, on Tinderbox, that would be very useful.
Mark Anderson: Okay. Yeah. I can do that. Yeah.
Frode Hegland: So, in closing then, because we have less than a minute left: augmented documents. I don’t know what we’re going to call them. Exploded documents, 3D pop-up book, whatever. What is that? How is it stored, the knowledge of how it should unfold? Should we think of that as a workspace for the document?
Frode Hegland: You know, maybe that’s what it is. These are things we now need to start thinking about, because, yeah, we started shaping...
Speaker5: 50 messages.
Frode Hegland: Great, guys. You expect me to read them?
Speaker5: Okay.
Frode Hegland: Any further comments or questions for today?
Andrew Thompson: I have to run. But this was very enlightening, having more of a detailed explanation of how you envision this project actually shaping up. So I’ll do my best. It’s a lot of plan adjustments, but, you know, we can get to it; it might just be a while.
Frode Hegland: You have been extremely good at doing your best. I think you should forget almost everything you saw today. Focus on the map as you think it should be now, because it’s much better for us to have a map, and then we can say no, no, no, and that is part of the record of the work we’ve done. So then we can say, we’re choosing this, we’re not choosing this. So don’t worry about any of these things. This is for the general discussion that you’re part of. But don’t pivot on anything, okay?
Andrew Thompson: Okay. I never had a clear plan on what the map would look like anyway, so it’s okay. I’m making it in its own space right now because it’s easier to test, so I’ll do that. I’ll get the demo out so we can test it, and then we’ll decide if we want to integrate it, or how we want to have it show up, because it does feel out of place.
Frode Hegland: Even if, at the moment, it is out of place and it’s a separate thing, it’s worthwhile for the project. We’re very grateful. Have a wonderful weekend. Wish you were here.
Peter Wasilko: Have a good weekend.
Speaker5: Yeah.
Frode Hegland: Bye, guys. And you and Mark have a medium weekend with us.
Mark Anderson: Yeah, I’ll catch up on my travel plans and stuff in due course, I guess directly. So, yeah. Thanks.
Speaker5: Mark. Bye. Bye bye.
Chat log:
16:02:13 From read.ai meeting notes : Frode added read.ai meeting notes to the meeting.
16:02:43 From Fabien Benetou : Secret door!
16:04:12 From Fabien Benetou : happy birthday everyone!
16:05:16 From Andrew Thompson : Nice plot twist showing up in the background Adam. Well executed haha
16:06:20 From Mark Anderson : Reacted to “Nice plot twist show…”
16:06:30 From Frode Hegland : https://public.3.basecamp.com/p/G9dZo9YA9taFppcWyGnf3mc7 AGENDA
16:07:13 From Frode Hegland : Reacted to “Nice plot twist show…”
16:07:51 From Andrew Thompson : Fruitbat is very outgoing, I think he counts as an honorary lab member with how much we hear about him
16:10:43 From Dene Grigar : Reacted to “Fruitbat is very out…” with ❤️
16:11:09 From Dene Grigar : Replying to “Fruitbat is very out…”
Fruitbat can’t join us. He would bite us too much
16:13:16 From Frode Hegland : https://futuretextlab.info/current-testing/
16:13:31 From Dene Grigar : In my lab meetings, we always clap for the things people do
16:13:45 From Peter Wasilko : Sorry I’m late, just finished brunch with Mum.
16:13:52 From Dene Grigar : Reacted to “Sorry I’m late, just…” with ❤️
16:14:16 From Fabien Benetou : FWIW https://hmd.link?URL
16:14:22 From Fabien Benetou : (over same WiFi)
16:19:42 From Mark Anderson : I’ll bring my Quest 3 on Friday.
16:26:28 From Fabien Benetou : joint gaze iirc, forgot the proper name
16:27:49 From Frode Hegland : Reacted to “I’ll bring my Quest …”
16:30:19 From Mark Anderson : virtual manicules: ☞
16:31:20 From Dene Grigar : I would imagine it depends if you are accustomed to reading up and down or left to right. It’s cultural?
16:33:13 From Frode Hegland : Coordinates for how the user “orientates” in the space. In concert, soft space grab and room move.
16:33:40 From Mark Anderson : Yes, having self as another (point) object in space makes sense, in terms of exploring a space.
16:34:17 From Mark Anderson : Aside: has anyone used PDF Liquid Mode? Invented 2020 but doesn’t seem to have made much impact. Would be interested to hear folks’ thoughts (offline/email is fine)
16:35:50 From Frode Hegland : Reacted to “Aside: has anyone us…”
16:40:56 From Peter Wasilko : The best movie version of teleportation was Jumper: https://www.imdb.com/title/tt0489099/
16:42:25 From Fabien Benetou : inner ear
16:42:45 From Dene Grigar : What is more common?
16:42:46 From Fabien Benetou : otherwise digitally often a mismatch
16:44:30 From Mark Anderson : So: avoid tasty looking poisonous fruit in our VR environments
16:45:23 From Dene Grigar : that makes sense for the music selected for Beat Saber. The music starts slowly but picks up speed. This allows the user to assimilate to the VR environment. Makes sense. I thought it was to help users to level up, but it may be to keep them from getting motion sickness
16:47:12 From Mark Anderson : Is VR sickness a significant problem? I ask as I’ve seen little discussion of it other than as a potential problem (though for some it may be a near-constant result, as for other sensory-based sickness.)
16:47:26 From Dene Grigar : Reacted to “Is VR sickness a sig…”
16:48:40 From Peter Wasilko : Curtis’ book : https://www.amazon.com/Hyper-Reality-Art-Designing-Impossible-Experiences/dp/B0C7T3NYBW
16:49:00 From Frode Hegland : Made a Flow view in Reader which is similar. Motion sickness: Less on Quest Pro since I can see real periphery. Sloan is swivel chair though. Might usefully have vertical teleport/movement. Less motion sickness going forward.
16:49:49 From Frode Hegland : Reacted to “Curtis’ book : https…”
16:51:51 From Mark Anderson : I’m now imagining the Safety Case form for a non-platform teleportation method (no safety rail!)
16:53:42 From Peter Wasilko : A portal model is another alternative to teleporting: https://www.google.com/url?sa=t&source=web&rct=j&opi=89978449&url=https://www.youtube.com/watch%3Fv%3Dcox7481IE6o&ved=2ahUKEwjFz8H4m7OGAxXYvokEHSmmA8QQwqsBegQICBAG&usg=AOvVaw3a2JywYO6jmlED8xS1GynQ
16:55:02 From Mark Anderson : [Aware name collision re “liquid” and “flow” re Frode’s tools vs. Adobe’s feature. I don’t equate the two]
16:58:55 From Peter Wasilko : I’d want a British feeling library with lots of detailed wood panels, coffered ceilings, and ambiance worthy of a Sherlock Holmes movie.
16:59:30 From Peter Wasilko : Are there any spaces like that for AVP yet?
17:00:10 From Dene Grigar : Peter, I am imagining three different background choices, one of them would be the classic library reading room experience
17:00:34 From Peter Wasilko : And the other two?
17:00:38 From Dene Grigar : The default is the gray one we are using
17:00:53 From Dene Grigar : the third would be a nature-based environment
17:01:36 From Peter Wasilko : A rainy cyber punkish neon city scape out of Blade Runner would be a nice addition too.
17:03:32 From Peter Wasilko : Also noteworthy was the “City of Text” UI from the 1995 movie Hackers:
17:04:56 From Dene Grigar : I love Blade Runner but the novel even better
17:06:54 From Peter Wasilko : Hat Tip: https://scifiinterfaces.com/2023/12/11/hackers/ for the images!
17:07:51 From Peter Wasilko : Also that page has a rather nice discussion of their practicality.
17:09:23 From Mark Anderson : Replying to “Also that page has a…”
Hacking your very own Gibson?
17:09:44 From Dene Grigar : Philip Dick
17:15:39 From Peter Wasilko : Replying to “Also that page has a…”
I could do wonders with that much compute power!
17:18:23 From Peter Wasilko : Another mode of finding a paper is the progressively refined filtering model showcased in Star Trek: The Next Generation’s Hollow Pursuits episode: https://memory-alpha.fandom.com/wiki/Hollow_Pursuits_(episode)
17:19:48 From Peter Wasilko : “Someone could've picked up an untraceable substance and carried it around the ship. The common link is that both Duffy and O'Brien were in the cargo bay with the failed anti-grav, and one of them was present at each of the other malfunctioning equipment; they could be carriers. La Forge, with the help of the computer, narrows the list of suspected reactants ā those that would not be picked up by a standard scan (15,525), exist in an oxygen atmosphere (532), and can modify the molecular structure of glass (5). He and the others then begin evaluating the five remaining substances one by one.”
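The progressive refinement La Forge performs in that quoted passage is just a chain of filters applied to a candidate set, and finding a paper could work the same way. A minimal, illustrative sketch (function names are hypothetical):

```javascript
// Narrow a candidate set by applying a sequence of predicates in order,
// mirroring the "15,525 -> 532 -> 5" style of progressive filtering.
function refine(candidates, predicates) {
  return predicates.reduce((remaining, pred) => remaining.filter(pred), candidates);
}
```

For papers, the predicates might be “mentions this keyword”, “published after this year”, “cites this author”, each step shrinking the set the user still has to inspect by hand.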
17:21:48 From Fabien Benetou : I did notice the "smart" part and got triggered
17:22:05 From Peter Wasilko : Reacted to “I did notice the "sm…”
17:22:18 From Peter Wasilko : Replying to “I did notice the “sm…”
Me too!
17:31:20 From Mark Anderson : Query elements – I like!
17:33:01 From Peter Wasilko : Reacted to “Query elements – I l…” with ❤️
17:35:34 From Fabien Benetou : (have to go in few min)
17:35:56 From Peter Wasilko : Also from a Systems Engineering perspective, we really need to provide a “Hook” for dynamically loading custom plugins at run time, so they can be automatically attached to a menu and invoked. It doesn’t matter what they do, just creating a pop-up “Hello World” would work as a proof of concept to justify follow-on grant support.
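The plugin “Hook” Peter describes could be as small as a registry that the menu system reads from, so commands registered at run time appear and can be invoked. This is a sketch of the idea with a hypothetical API, not anything in the project yet.

```javascript
// Run-time plugin hook: plugins register a menu label and a function;
// the menu invokes whatever is registered under that label.
const pluginRegistry = new Map();

function registerPlugin(menuLabel, run) {
  pluginRegistry.set(menuLabel, run);
}

function invokePlugin(menuLabel) {
  const run = pluginRegistry.get(menuLabel);
  return run ? run() : undefined;     // undefined for unregistered labels
}
```

Peter’s proof of concept would then be a plugin that registers a “Hello World” label whose function pops up a greeting; the hook itself never needs to know what plugins do.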
17:36:29 From Dene Grigar : Can we say “Read Metadata”
17:40:07 From Andrew Thompson : Would it be strange to put just ‘Manage’ then to keep the menu to single words? Or does that lose the concept?
17:40:15 From Andrew Thompson : Ah, perfect
17:40:40 From Fabien Benetou : Have to go, take care everyone, for people in the UK, see you soon!
17:42:32 From Peter Wasilko : Back with Chocolate-Mint Coffee.
17:43:04 From Dene Grigar : Replying to “Back with Chocolate-…”
sounds sweet
17:44:42 From Peter Wasilko : Replying to “Back with Chocolate-…”
☕ + 🍫 + 🌿
17:49:03 From Dene Grigar : I need to leave in a few minutes
17:52:51 From Peter Wasilko : And we should have a hook to eventually call up a command line interface, maybe just a grayed out menu name reading “chat with the system”.
17:54:24 From Peter Wasilko : My initial Island Grammar can painlessly extract each paper’s heading hierarchy.
17:54:57 From Peter Wasilko : So we could have a paper’s topic outline as an element.
17:55:31 From Dene Grigar : I need to go. I have another meeting in 5 minutes and I need a quick break beforehand
17:55:34 From Dene Grigar : Bye folks!
17:55:41 From Peter Wasilko : Bye Dene!
17:56:46 From Peter Wasilko : Is there is a database anywhere of who studied under whom? I recall a few history of science papers that looked at researcher’s pedigree tracing their doctoral advisors back a few generations.
17:57:08 From Peter Wasilko : *researchers’
17:57:36 From Andrew Thompson : Thanks Mark, I’ll keep that concept in mind here
17:57:50 From Peter Wasilko : (e.g. who was advised by Dougās advisees)
17:59:17 From Peter Wasilko : How hard would it be to include a concordance of words used in the full text of the corpus?
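The corpus concordance Peter asks about is, at its core, an inverted index from each word to the documents containing it. A minimal sketch with deliberately naive tokenization (names are illustrative):

```javascript
// Build a word -> Set-of-document-ids index over a corpus given as
// { docId: fullText }. Tokenization here is naive: lowercase letter runs.
function buildConcordance(docs) {
  const index = new Map();
  for (const [id, text] of Object.entries(docs)) {
    for (const word of text.toLowerCase().match(/[a-z]+/g) ?? []) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(id);
    }
  }
  return index;
}
```

A production version would also record positions (for keyword-in-context display) and handle stemming and stop words, but the hard part is extracting clean full text from the PDFs, not the index itself.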
17:59:54 From Peter Wasilko : Great Idea, Frode!
18:00:11 From Peter Wasilko : Tinderbox has much to teach us.