28 February 2024

Frode Hegland: All right. There we are.

Peter Wasilko: Good morning.

Frode Hegland: Morning, morning.

Peter Wasilko: Oh, just out of curiosity, Frode — on the Apple Vision Pro, if you have a regular Mac app and it has more than one window, can you bring each of its windows out in different positions in space, or are they stuck with only one window per application?

Frode Hegland: This is the most annoying part of the Vision. The Mac mirroring is purely a mirroring of whatever your display is — it’s said to be 4K — but you cannot open more than one, and you cannot make it vertical. So it’s not taking the windows from the Mac; it is just taking the screen, in kind of a dumb manner, so to speak.

Peter Wasilko: Well, hopefully that’ll be fixed in a software patch very, very soon.

Frode Hegland: No, no, no — I think that’s going to be unfortunate. I agree it should be, but I think it’s going to be like that for a while. It’s just very, very annoying; I think it’s a bit architectural. Good morning, Adam.

Speaker3: Evening.

Frode Hegland: Yeah. Oh, yeah. Of course — you and me are in the same part of the planet, aren’t we? Dene was here, but I restarted and she just went to get a coffee or something, so we’re ready. Have you both had a chance to look at Andrew’s work?

Peter Wasilko: This video that he just posted. Yep. Looks good.

Frode Hegland: Yeah, it’s a very good interaction as well. You’re going to get a link — I’ll put it in the group chat here. Hello, Rob.

Peter Wasilko: Hello, Mike. While I’m.

Frode Hegland: So sorry. That was two people as well. Did you also say something, Peter?

Peter Wasilko: Yeah. I’m going to mute my mic while I’m eating. Okay. So you don’t hear me chewing.

Frode Hegland: Yeah, that’s a very good idea. Andrew, I’ll say it before Dene arrives: I am very impressed with this update.

Andrew Thompson: Well that’s good.

Frode Hegland: Yeah, that’s very good. It’s very, very nice. There’s plenty to do, obviously, and you have given us a good basis to do it from. So, yeah, it’s very, very good.

Andrew Thompson: Have you had it? Hopefully so. What was that?

Frode Hegland: No, I was just going to ask Rob if he’s had a chance to look at it.

Peter Wasilko: Now I.

Andrew Thompson: Reading the.

Frode Hegland: Hello, Rob and Mark. I’m pasting the link to our agenda in the chat here, and in the agenda there’s a link to Andrew’s update. So when we get to that section you can try it. It’s really good — jolly, jolly good. So Bruce will be joining us a bit today. Oh, there he is.

Andrew Thompson: Hello, Bruce.

Bruce Horn: Hey, guys.

Frode Hegland: I was just telling Andrew that I’m very happy with this update. And you have actually experienced it in the Vision Pro.

Andrew Thompson: Yeah.

Bruce Horn: You’re really awesome, Andrew. It’s really fun — it’s actually more than that. Because it’s a new experience, it’s something that you kind of have to spend a little time thinking about and grasping the consequences of. So yeah, I’m still thinking about what I saw. Really cool.

Andrew Thompson: Yeah, thank you. Hopefully it prompts some discussion and whatnot, so we can figure out what we want to do next.

Bruce Horn: Absolutely.

Frode Hegland: Dene, I was just telling Andrew how brilliant I think his updated work is, which we’re going to go through. And Bruce had a look at it earlier today when we had a chat.

Dene Grigar: So I was going to say: Andrew is a graduate of my program and worked on Rob Swigart’s Portal project, which is a VR adaptation of his game, and he’s a brilliant, brilliant person. He’s been working in my lab now for, I want to say, three to four years. I grabbed him up when he was an undergraduate, and I’ve kept him as long as I can possibly keep him. I may not be able to keep him forever, but I’m doing a pretty good job. One more thing I want to say: Andrew’s house-sitting for me when I go — I’m going to a funeral this weekend. Andrew, do you eat shrimp? I forgot.

Andrew Thompson: Do I eat shrimp? Yeah, shrimp is good.

Dene Grigar: I’m making a big pot of gumbo tonight, and you’ll have some left over for you when you get here.

Andrew Thompson: Oh, wonderful. Thank you.

Dene Grigar: Good. All right, all right. That’s the business I had to get out of the way. Thank you, everybody. Good morning.

Frode Hegland: That’s pretty cool. So, I don’t know who’s coming. Randall will probably come in late due to being in California — which I know you are too, Bruce, but you’re here on time. Dene, do you want to start with the agenda? I put the agenda as a link in the chat, everybody. If you haven’t seen it, I’ll paste it again, because sometimes it doesn’t update, you know?

Dene Grigar: Hi everybody. So yeah, I’m meeting on March 27th with the CEO of the Murdock Trust — not to be confused with the Murdoch family in the UK. This is a local billionaire’s trust: a $2 billion trust fund that gives money out to art and education and has been a big supporter of my university for years. So we’re going to be holding the symposium at their location, which is right on the waterfront in Vancouver. Bruce, I don’t know if you know anything about Vancouver, Washington, but it sits across the Columbia River from Portland, right?

Bruce Horn: Yeah, absolutely. Absolutely. I mean, I’ve lived in Portland, Seattle, and the Bay Area, so I know all those areas.

Dene Grigar: Well, you probably knew Vancouver — when I first came here in 2006, it was a sleepy little town with one decent restaurant. It’s totally different now. In the last 18 years it’s morphed into this kind of metropolitan area, we’ve had the growth of creative technologies here, and the waterfront has been built out of this kind of old industrial area — there were a lot of paper mills and stuff like that. That’s all gone, and it’s turned into this gorgeous space. It looks like Vancouver, BC’s waterfront now.

Bruce Horn: Have you been to Twigs, the restaurant?

Dene Grigar: Yes. Yes, I love Twigs.

Bruce Horn: Yeah.

Dene Grigar: Which is right there, yeah. So anyway, it’s going to be at the waterfront, and the Murdock building is right on it. Yeah — I’m sorry.

Frode Hegland: I’ve been to Twigs too, you know.

Speaker3: Yes.

Dene Grigar: Of course I took him there, but yeah. So it sits right there on the waterfront. The event venue is on the seventh floor; there’s a balcony overlooking the Columbia River, you can see the boats — it’s really romantic and beautiful, and the space is unbelievable. We have the space for that Saturday. So I am going on the 27th to finalize the contract with the CEO. They’ve donated the space to us, which is great, because we only have $3,500 for the conference, and that doesn’t go very far.

Frode Hegland: Yeah. It’s fantastic.

Dene Grigar: So happy. What I’m going to do, once I get this finalized and have the contract in hand, is start going around town to see if I can get some funds to pay for the breakfast — you know, sponsorship for breakfast, lunch, cocktails, stuff like that. So first step: get the venue on contract.

Speaker3: Yeah. It’s.

Frode Hegland: Yeah. It’s perfect. It’s going to be a good three days — as in one main day, an evening before, and a day after. Yeah, it’s going to be brilliant.

Dene Grigar: And then, Frode is having lunch with Tom Standage from The Economist. Tom’s a friend of Frode’s, so it’s not an official interview, but he’s going to start getting the word out about the project by showing it to Tom, and I’d love to see an article in The Economist — hopefully that’ll come out of this.

Frode Hegland: Yeah. So my relationship with Tom is he’s quite a close friend. I’ve known him for ten, 15 years — not as long as you, Bruce, but, you know, getting close. I sent stuff to them: when I was trying to do things way back then, I sent this massive letter, physically huge, just to get a bit of attention from some newspapers and magazines. And of course, nothing happened anywhere except one — somebody from The Independent wrote back a long email about how annoyed he was at it. But then I was in Japan, in Kyoto, at the Science and Technology in Society forum, and opposite me at lunch was Tom, who then introduced himself. So we’ve been friends ever since. He’s a good guy. He was the technology editor; now he’s the deputy editor of The Economist. So the reason I’m saying this — a little bit for you, Andrew — is that what we agree on today should probably be polished by then. I think what you have for today is really, really good. But when I show this to Tom, of course he’s going to want to have a look at the Vision, and of course he’s going to say, so what are you guys doing with this Sloan thing? So of course I’m going to show him these updates. So that’s why it’s there, you know, as you’ve known from last week. But back to the agenda.

Dene Grigar: You can invite him to come to our symposium.

Frode Hegland: He has actually presented at one of them; he’s been to at least two or three. I’ll absolutely invite him to come to Washington. So that’s one of the things that is also in the announcement. Dene and I are, of course, working on the invitation draft and all of that stuff, so we’ll share that with you, we hope next week, barring any insane workload, plus the travels that she unfortunately has to do right now.

Dene Grigar: I’m more concerned about not having money in the government to fly back home and having a TSA person go through my material. I hope that gets worked out. I told the students yesterday in my class: I know I can get there; whether I can come back or not is another story. I’m packing as if I’m going to live in Asheville, North Carolina for a month till we get this sorted out, so we’ll see what happens. Bruce, thank you for joining us today — that’s in the agenda, so I appreciate you being here. And then, as Frode mentioned, we’re working on the book and symposium call, and I hope to have that worked out with him by next Wednesday.

Frode Hegland: So most of you kind of know Bruce, but I’m going to introduce you to Bruce — excuse me, introduce Bruce to you guys. Anyway, I owe Bruce friendship and admiration and everything you can pile on top. Bruce was the first guy who actually took me seriously within this line of work, and that really made a huge difference at that point in my life, about 20 years ago, when I was with Doug and all of these things. And for those of you who don’t know — shame on you — Bruce wrote the original Mac Finder, plus more. So when we were in California celebrating the anniversary, to be there with you, Bruce — yeah, not a highlight of my career, a highlight of my life. I was so grateful for the time we spent there.

Andrew Thompson: That’s great that you came.

Bruce Horn: And by the way, we met before you got involved with Doug — I think before you started to get involved there. It was a long time ago; I don’t think it was more than 20 years ago, but I remember we met in San Carlos. You had emailed me out of the blue somehow, and I thought, why not? So we had burgers.

Frode Hegland: Our first date was burgers.

Bruce Horn: It was great. It was totally.

Speaker3: Great. Anyway, yeah, that’s right.

Frode Hegland: I emailed you out of the blue because of a TidBITS article you wrote. So, the reason I’m excited for Bruce to be part of our community — and of course he has been coming in and out, and will now also come in and out as time allows — is a little bit that he’s working at Apple on AI things, which are just mind-expanding, but it’s more his perspective. He redid the Finder many years ago — he called it iFile at first, and also Context — making it a much richer environment. So Bruce shares a perspective with us to such deep degrees. And despite looking so young, he was actually at Xerox PARC when Steve Jobs and them went in and had a look around. So his knowledge of the history is just so deep and can really benefit us.

Bruce Horn: It was a long time ago, a long time ago. And the 40th anniversary in January — that was pretty interesting. You know, I have to say, you guys can watch the Computer History Museum video; it was fun. I mean, Bill Atkinson did most of the talking, but it was just a great experience. And, Frode, thanks for coming. I know it’s a long way, but it was great that you came.

Speaker3: Yeah, well, you had to.

Dene Grigar: You had to come for the onboarding for the Sloan project. So he killed two birds with one stone.

Speaker3: Three birds?

Frode Hegland: Three birds — I also picked up the Vision.

Dene Grigar: That’s true. And then.

Dene Grigar: When he comes back in the fall, he’ll probably go up to Seattle and visit some folks up there. So he’s using this as a good opportunity to kind of reconnect.

Frode Hegland: Right. So that’s all of that. Anyone else have any other announcements? Oh, I forgot to put “any other announcements” on the agenda, but you know.

Speaker3: I, I.

Dene Grigar: I want to mention one more time that I learned from Klaus Eisenbach that there are 15 seats that SIGWEB is funding. Mark Anderson, listen to this: 15 seats for that graduate program that Klaus and I are building. And so, as I mentioned on Monday, I’d love to be able to get the word out about these. These are fully paid: travel, accommodations, access to the conference, the workshops, and then the school itself, the actual training. It’s all paid by SIGWEB. I mean, that’s 15 seats. So I’d like to be able to get some VR and spatial computing people involved in this. So once we get that call finished, which is in the works currently, I want to get it out to you, and Mark, I’d love for you to spread the word if you have any academic connections, people in graduate programs, because it is an opportunity. I can’t even imagine — it’s about $75,000 worth.

Speaker3: Yeah.

Frode Hegland: This is huge. Of course, what we’re doing now in this group will be very, very relevant to that — you know, turning references hypertextual and stuff like that. So everything, yeah, all intertwined. Well, now we just have to figure out a way to bribe Klaus to just get my damn thesis approved, but that’s a whole different discussion, right? Thanks for the link.

Dene Grigar: You know how I feel about that.

Frode Hegland: Right. Okay.

Speaker3: So.

Frode Hegland: I just clicked on your link first to put it in the background, as it were. Right. So, yeah — anybody else have any updates or things to mention before we dive in? Okay, let’s dive in. On the agenda, please click on Andrew’s update. Andrew’s going to take the floor now. I’m also putting a direct link in here. The page has both a little video and a description, but please, Andrew, take us through it. And if you do have a headset nearby and you haven’t looked at it already, now would be a good time to do so.

Andrew Thompson: Sure, I’ll go through these in kind of topics, and we can have discussions on them as we go, or you can just let me breeze through it and then chat afterwards — whatever works for people. I guess raise your hand when inspiration strikes. There’s a little video showcasing stuff; I can’t really screen share it because of bandwidth, but it’s right on the link. The first big thing is the citation text block that I introduced last week. Now, instead of it just being a flat panel, it kind of curves slightly; it’s a lot more pronounced when it’s close. And it can also swipe left and right as well, so you can actually move it around. For those of you who haven’t tested it, that’s done by extending your palm: this would be left and right, and this would be up and down. Instead of just giving it a freeform movement, which kind of feels like it’s stuck to your hand, I wanted to keep it locked to an axis. That can of course be debated, but it made the most sense when I was adding it at the time. If you’ve done the previous wrap tests from a couple of weeks ago, this is basically just that re-implemented with two different functions at once. Caveat: it still only works with one hand. I’m aiming to have it work with both hands eventually, but I decided to stop wasting development time implementing all that and just focus on testing the concept with one hand. If it’s good, then we’ll mark it down and try to add it to both hands.
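
A minimal sketch of the axis-locked swipe Andrew describes, assuming a three.js-style Vector3 for the palm position; the function name and the way the dominant axis is picked are illustrative, not taken from the prototype:

```typescript
import { Vector3 } from "three";

// Axis-locked swipe: take the palm's movement since the gesture began and
// keep only its dominant horizontal or vertical component, so the panel
// feels "on rails" instead of freely stuck to the hand.
function axisLockedDelta(start: Vector3, current: Vector3): Vector3 {
  const delta = current.clone().sub(start);
  // Compare horizontal (x) against vertical (y) travel and zero out the rest.
  if (Math.abs(delta.x) >= Math.abs(delta.y)) {
    return new Vector3(delta.x, 0, 0); // left / right swipe
  }
  return new Vector3(0, delta.y, 0); // up / down swipe
}
```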

Andrew Thompson: For people who are left-handed — so Dene, and potentially anyone else; oh, and Bruce — you can swap the hands just because of how the headset loads: load in with one hand like this, then pull your other hand in, and it’ll swap the order that they load. The headset doesn’t actually care about left versus right; it just assigns the left hand to the first hand it sees, so you can do that trick to change the dominant hand. We also have a sort of new slider added to the test menu, which is right there on the main panel. So if you tap and the menu pops up, you can slide that back and forth and it will adjust the reading distance of the text block. When it’s up close it wraps tighter, and when it’s further away it kind of flattens out and goes off into the distance. Because the font size is so small, it’s kind of silly to put it far away, but it is technically an option, and it’s useful to imagine what it would be like to have some text further away. Perhaps we could also scale up the font size or something like that. We can talk about whether we want the scaling to be automatic, or whether we want that to be something the user can control themselves. Either might be nice; it just depends. We don’t want to overload with a bunch of settings, and we do want to have multiple distances at once.
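
A hedged sketch of how a single slider value could drive both the reading distance and the wrap curvature Andrew describes (closer text curving more tightly, distant text flattening out); the distance range and curvature relation are assumed values for illustration, not the prototype's numbers:

```typescript
// Map one slider value in [0, 1] to a reading distance and a wrap curvature.
function readingLayout(slider: number): { distance: number; curvature: number } {
  const minDist = 0.4; // metres, roughly arm's length (assumed)
  const maxDist = 3.0; // metres, "across the room" (assumed)
  const distance = minDist + slider * (maxDist - minDist);
  // Curvature shrinks with distance, so far-away text reads as nearly flat.
  const curvature = 1 / distance;
  return { distance, curvature };
}
```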

Andrew Thompson: In the end, that might be something good to tinker with soon. I picture it as, say, at least three different set distances that you can place text on, and they would all move together with this slider. Instead of just one slider dot, you’d probably have two, one on either end — a minimum and a maximum — and the one in the middle would just average between the two. We’ll test that, but that’s kind of the vision for that part. And then this is kind of more of an extreme test: we have a text selection tool implemented. We talked about how we want to have different tools that you can assign to different hands and swap between; the swapping doesn’t do anything yet, so we just have the selection tool assigned to the hand by default. You won’t see it when you have your hand out, but as you pinch your fingers together it’ll show up, almost like a pencil in your hand. If you point at the text, you’ll get a little laser dot at the end of a line, and if you squeeze all the way it will start selecting text; release and it will stop selecting. All of the selected text gets output into the console log in the browser, which is useless for you guys right now, but it shows the concept works. There are a lot of bugs with the selection. I’m going to try to polish it, but it’s giving me a lot of headaches, so I couldn’t get it all refined for today.

Andrew Thompson: So if you find those bugs, I’m aware — I’ve got them listed here, and I’ll briefly run through them. If you’ve already changed the scale of the text block and you try to select, you’ll notice that the highlight is not even close to lining up. Whoops — I have to figure out what’s going on there. It desyncs itself very fast; if you have it at the default distance it’s fine, so you can reload the page and it’ll go back to where it’s supposed to be. The text selection pretty much works correctly, but it doesn’t line up with the visual highlight if you’re far away: it should still be selecting the correct text and outputting the correct stuff in the console, it just doesn’t show it visually, because they’re two separate components. It also gets very laggy if you start selecting lots of text at once — if you grab and point all the way down, the frame rate will drop horrendously. That’s just because I programmed it in the grossest, most overcomplicated way. It’s so inefficient, I know; I’ll work on it. And then the last bit isn’t really something that we have control over: the finger tracking. If you don’t have good lighting where you’re testing, it gets really jittery, so you’ll notice the hand kind of doing this, and that, of course, makes gestures very difficult to trigger properly. So if you’re trying to squeeze, sometimes the highlight cancels itself out.

Andrew Thompson: Like, it just turns off for no reason. It’s because it lost track of your fingers for a moment; they sort of slid, and it disabled the highlight. This is a lot better if you have good lighting, and it’s a lot better if you have your hand rotated so that the headset can see all of your fingers. If you have it like this, sometimes it doesn’t really know what it’s doing. Not a lot we can do about that; it’s just something to keep in mind as VR technology improves. We’ve seen talk that Meta wants to make a kind of wrist scanner that basically tracks the fingers for you and then sends that information, which would be a lot more reliable. That’s, of course, years out — I don’t know if they’re even developing it yet, but there are talks, so people are aware of these issues. And then finally — I know I’ve just been monologuing for a while — I started early background work on extracting information from an ACM paper, like we talked about. There’s nothing to really show for it yet, except if you check the console log when the page first loads: it’s a big string of numbers which mean nothing by themselves, but they are the citations in the order they’re found in the document. It just shows that, yes, we have a way of extracting information from the HTML papers, which works a lot better than the PDF ones. So, amazing — thank you so much, Mark. Yeah, that’s it for me.
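
A hedged sketch of the kind of document pass Andrew describes for listing citations in order. The selector assumes citation links in the ACM HTML point at bibliography anchors (e.g. href="#bib12"); the real markup, and Andrew's actual extractor, may well differ:

```typescript
// Walk an HTML paper and collect in-text citation numbers in document order.
function extractCitationOrder(doc: Document): number[] {
  // Assumption: citation links reference bibliography anchors like "#bib12".
  const links = doc.querySelectorAll<HTMLAnchorElement>('a[href^="#bib"]');
  const order: number[] = [];
  links.forEach((link) => {
    const match = link.getAttribute("href")?.match(/\d+/);
    if (match) order.push(Number(match[0]));
  });
  return order; // e.g. [3, 1, 1, 7, ...] — citations in the order they appear
}
```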

Frode Hegland: It’s fantastic. It’s fantastic. Okay, I have many comments, so I should go last.

Dene Grigar: Mark.

Speaker3: I.

Mark Anderson: Just tried it — really nice, super. A couple of things I did notice: because I’m sitting down, I’m probably using quite a close-in boundary, so I couldn’t actually get the hand swipe to trigger — I suspect it’s because I’m sitting too close. But if I push back, then I go out the back of the boundary. So, you know, a first-world sort of problem anyway, and I don’t think it detracts from things. Also, seeing what you’ve done on video is very interesting. The other thing I noticed is that if I just hold my hand like that — I didn’t think my finger was that shaky, but I could see the pointer shaking at the other end. So whether there needs to be some dampening on that: I don’t know if that’s a thing one can do, but essentially, if we have a way of taking out the natural degree of tremor, that might be something to consider. I’d probably have to be told how that’s done, but yeah, that’s the other thing I noticed. But otherwise, thank you — this looks really interesting.

Andrew Thompson: Yeah, the hand swipe distance — very good point. I kind of realize I built it for my setup, where I can reach out and do that. Of course, that’s not going to work for people who have the monitor closer. So we can tinker with that, whatever’s comfortable. We don’t want it to accidentally trigger a swipe when you have your hand just right in front of you, but we’ll mess with that. Maybe instead of distance from the panel, it’s distance from the headset: if your hand is held out far enough from your head, it triggers, rather than if your hand is close enough to the text panel. Yeah, that—
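
A small sketch of the two trigger conditions Andrew is weighing — arming the swipe when the hand is near the text panel versus when it is held far enough out from the headset. The 0.35 m thresholds are placeholders, not tuned values from the prototype:

```typescript
import { Vector3 } from "three";

// Option (a): arm the swipe when the hand is close to the text panel.
function swipeArmedByPanel(hand: Vector3, panel: Vector3): boolean {
  return hand.distanceTo(panel) < 0.35; // placeholder threshold in metres
}

// Option (b): arm the swipe when the hand is held out far from the headset,
// which works the same for people sitting close to or far from the panel.
function swipeArmedByHeadset(hand: Vector3, head: Vector3): boolean {
  return hand.distanceTo(head) > 0.35; // placeholder threshold in metres
}
```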

Speaker3: Might be because.

Mark Anderson: One thing I’ve noticed — because I’m at plus three in my prescription now, but otherwise unchanged — is that with the headset on, if stuff’s actually fairly close in, it’s really quite readable. Which doesn’t play well with hand tracking that really wants you at a distance, so I find myself trying to do this, which is an interesting observation. Obviously the ability to read the stuff without glasses or without lots of extra correction is a useful facet. But again, something of a first-world problem at this point, because I appreciate the main thing is just, you know, to get the basics going, and we can fiddle with human inadequacies after we’ve done that.

Speaker3: Thanks, Adam. There’s a.

Dene Grigar: Comment.

Adam Wern: Yeah, on the hand or finger tracking for the laser-pointer selection: I moved away from attaching it to the finger to actually attaching it to the wrist or the middle of the hand, because that tracks — not perfectly, but much, much better. So it will feel like a wrist laser instead. A finger both moves naturally — the micro-tremors — and the tracking itself is a bit hard to average out; you can do that, but even then it will move around a bit. The wrist tracking is almost rock solid, so if we’re okay with having a wrist pointer instead, going from the center of the hand, that is so much better. Then you can also pinch without moving your pointer, because the wrist is more fixed when you pinch, but the fingers move when you pinch. So we should consider it, or try it out a bit more — Andrew and I have discussed that. And I think it’s fun to start with the most ambitious version, to actually track the fingers out there, and the tech will at some point improve as well. But it could be a way to solve it for now, to move it back to the center of the hand. Yeah.

Andrew Thompson: So unfortunately, Adam, you’re totally correct, but I have bad news: it is attached to the wrist. It’s just moved up slightly, so it visually appears up there. So all of the shaking is still coming from the wrist.

Adam Wern: Hmm, but then we have to look into that, because I have prototypes where it’s more stable. So we will find a technique to make it more solid.

Andrew Thompson: Yeah, that’s definitely on my list of things I want to do. Potentially just disconnecting it entirely from the hand and having it try to smoothly follow the hand — though then you end up with the really annoying latency thing where you’re like, just go over there, and the pencil’s like, ha, I’ll take my time. And that drives you crazy. So I guess we’ve got to figure out what’s best.
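
One common compromise here is speed-adaptive smoothing (the idea behind the “1€ filter”): damp heavily when the hand is nearly still, to hide micro-tremor, and lightly when it moves fast, to avoid the lag Andrew describes. A minimal sketch assuming three.js vectors; the constants are illustrative starting points, not tuned values:

```typescript
import { Vector3 } from "three";

// Exponential smoothing whose strength depends on how fast the hand moves.
class PointerSmoother {
  private smoothed = new Vector3();
  private initialized = false;

  update(raw: Vector3, dt: number): Vector3 {
    if (!this.initialized) {
      this.smoothed.copy(raw);
      this.initialized = true;
      return this.smoothed.clone();
    }
    const speed = dt > 0 ? raw.distanceTo(this.smoothed) / dt : 0; // m/s
    // Cutoff rises with speed: slow hand -> strong smoothing, fast hand -> light.
    const cutoff = 1.0 + 4.0 * speed; // Hz, assumed constants
    const alpha = 1 - Math.exp(-2 * Math.PI * cutoff * dt);
    this.smoothed.lerp(raw, alpha);
    return this.smoothed.clone();
  }
}
```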

Frode Hegland: Guys, I have to go downstairs and open the door for someone. Please put in the notes. If there has been a decision on anything, I’ll be as quick as I can.

Dene Grigar: Adam, do you want to respond to that?

Adam Wern: No, but I think we have to experiment a bit here, and we will come up with a solution that is more stable. We will solve it. There are many ways forward, but they take some effort, so first it’s good to nail the interactions and then work on the exactness of it.

Andrew Thompson: Yeah, the finger tracking has to be improved — that’s not really much of a question. It’s unusable right now if you’re trying to actually grab single bits of text; it’s just all over the place. It’d be silly to have to pull it all the way close to you, then select stuff, then push it away — that’d be an obnoxious extra step. We could look into maybe a magnifier, but it’d be nice just to get the tracking smoother in general. Yeah, we’ll talk about it.

Dene Grigar: Peter, do you want to make a comment?

Peter Wasilko: Yeah. One other possibility might be keyboard-based selection. If we sort of resurrected the LEAP key system from the Canon Cat, you could pop up a virtual keyboard and be typing, and use the LEAP keys to jump around and navigate instead of trying to point with a finger.

Andrew Thompson: Yes. So, Peter, interesting thing about keyboards — the virtual keyboards in VR. I take it you probably haven’t played with them yet? They are kind of the most annoying thing ever made. It’s like the obnoxiousness of first learning to text, but placed really large so you can’t ever get the speed up. It only tracks your index fingers; the rest of the fingers do nothing, and that’s to avoid just hitting every button, because there’s nothing to rest your hands on — it’s just sort of there. So you’re sitting there doing this to type; it’s like the worst hunt-and-peck. So I suppose if you’re using keyboards as—

Peter Wasilko: The keyboard, though, if you were using a Bluetooth keyboard.

Andrew Thompson: Yeah. So that VR.

Peter Wasilko: But if we were, we should have, like, a Bluetooth keyboard mode optimized for that.

Andrew Thompson: Yeah, that would be really interesting. We could definitely look into that — having a physical keyboard linked in would be good. You probably won’t be able to see it, and I think it’d be pretty difficult to track it with the amount of time we have. But just linking the connection shouldn’t be too hard.

Dene Grigar: Can I respond to that quickly? When I think about working on a mobile device, I don’t use my fingers, I use my thumbs, and most of the people I know who work on phones don’t either — I mean, the thumb is a major, you know, appendage that we’re using to film with. So we’re used to doing hunting and pecking, so to speak, with our thumbs. There’s a good chance we’re going to get used to using our fingers for the same thing. That’s what I think we should think about.

Andrew Thompson: Yeah, but the keyboard is so large it’s never fast. And if you make it smaller, it doesn’t work either.

Dene Grigar: I mean, Apple made the change years ago with the mobile phone. We’re not using the QWERTY keyboard with all of our hands anymore; we’re typing like this, right? So I’m imagining that the shift to the visual — I mean, just theoretically speaking, the shift to the visual world — is going to be yet another shift away from keyboard experiences, so that we don’t necessarily need one. And I don’t know if you use the talk-aloud, but I was using my voice to make the text happen, so we probably won’t be doing much typing at all in the future, but speaking. And then we can go back and refine our words if we want to download this information and, you know, turn it into an academic article. But essentially we’re going back to an oral environment. We talked about this before, with the second orality — I wish Brandel was here — but we’re moving to a second orality where we’re speaking our words again. It’s kind of back where we were thousands of years ago. So I don’t think we want to waste a lot of time on keyboarding.

Speaker3: To be honest with you.

Adam Wern: And the same goes for text selection. We have locked ourselves into thinking that we should select character by character, dragging a thing over, and it feels so backwards to me. We want to have phrases or words or whole sentences or paragraphs, so it should be enough to kind of point toward a sentence or a paragraph and then be able to shrink that selection or expand it — not hunt single glyphs. It feels so backward to me; it even feels backwards on the regular computer, where you have more precision. I think we should leave that behind in XR and work toward a better text selection system.

Bruce Horn: I don’t know how to put my hand on this, but actually, you know, that kind of verbal selection is something that’s starting to happen — like saying, you know, “select this,” and you can point as well, or you could just speak: “select the first paragraph,” or “move that second sentence to the first,” things like that. It’s going to be a lot easier to do things like that, as long as you’re within an environment you can speak to. But yeah, actually I ran into something — I’ll have to find it — where somebody had done a whole new design of selection by speaking. It was very odd to start with, but it seemed pretty powerful. I’ll get it to you.

Dene Grigar: Just to follow up on what I was thinking about with using the finger: I can imagine that I’m looking at a text and I want to highlight a specific part of it, so I just run my finger across it, and then that appears in the document that I’m working on, right? So I can see the use of digits for particular pieces. But I think for creating new work we’re going to be speaking. So here’s the process — right now we’re talking about how academics do things, Mark. I highlight a reference, move it to my new text, swipe it over. Then I speak the annotated bibliography; I speak what that article is about: you know, Jay David Bolter talks about the way the mind is being shaped by its tools, specifically the use of computer technology, over traditional writing methodology — that’s Writing Space, right? And that goes right into that annotated bibliography. Okay, now I pick up another reference, drop that in, speak my notes. I could see that kind of process happening. There’s a term that’s used in theory called the logos syndrome, which is about the way we birth ideas — the logos is birthing through your words — and the tradition it comes from is an oral tradition. So we’re moving back to what’s called the logos tradition, which is totally different from what we’ve been experiencing for the last 3,000 years of textuality with writing. So I think it’s fascinating to think about; there’s an article there. Peter, you had your hand up, darling.

Peter Wasilko: Yes. Has anyone else used stenography keyboards? We might, with hand gestures, be able to get the equivalent number of chords entered in. I’m not sure how it worked, but if you could touch your thumb to several other fingers in combination, we might be able to achieve an analogous interaction mode to using a steno keyboard, and base that on the sounds, with a phonetic lookup dictionary to expand it into full spellings. And let me just drop in, for anyone who hasn’t seen one, a picture of one half of a split steno keyboard. There’s already software for the Mac, from the open-source stenography project, that allows Macs to use both a regular keyboard and a steno keyboard simultaneously; that software is called Plover.

Peter Wasilko: So we should take a look at that and see whether we could bring that into the VR environment.

Peter Wasilko: The thing is, you might touch your thumb to one, two, or three of the other four fingers, or all of them together, and you have two hands for that. Entering chords like that — I’ll have to give it more thought, but I think it might be doable.

Frode Hegland: Yeah. I don’t think we should invest in further hardware at this point, but those of us who want to should definitely play with it, if there is an interest and headsets are available. Don’t forget the interactions: the hand gesture thing is actually quite rich, so there’s a lot that can be done already there. Yeah — I’m sorry, I had to go and open the door downstairs. When we were talking just now, it was primarily about selecting text on screen, right?

Dene Grigar: Well, what you missed, quickly — and then we’ll go to Andrew — is that we talked about the fact that we’re leaving typing behind, and that we could use voice recognition. That’s getting better, and it’s working really well in the Apple headset, right? And we don’t necessarily need to worry about keyboards. We already made the shift to thumbs with the phone; hunting and pecking with fingers is just around the corner, right? It’s not that big a deal. That’s how I type anyway — I don’t type like a secretary, I type with hunting and pecking, and I’m very fast at it, and spell check is my best friend. So, you know, there are ways to get around that problem. A comment, really fast.

Speaker3: But.

Frode Hegland: Hang on — on the keyboard thing, are we talking about selecting, or about adding text, for writing?

Dene Grigar: Basically we’re talking about, you know, the idea of getting a Bluetooth keyboard and using that for augmenting our interactions with the headset, and the answer is: we don’t need that.

Frode Hegland: Okay, I’m not sure if that’s so black and white, because I think it really depends on the actions being done. You know, you do use a different part of your brain when you speak compared to when you write. So I don’t think, especially for the next phase of the project, we should completely disregard writing — it is a very different thing. But when it comes to selecting stuff, like for this part of the project, sure.

Speaker3: That’s quite a guess what I’m saying.

Dene Grigar: In the future, Frode, what’s going to happen? Yes, I think we’re in a transitional moment, absolutely. There’s still a keyboard on the phone, right? But in the future — yeah, I’m imagining in 20, 30, 50 years — our grandchildren are not going to be using keyboards in the way that we’re using them now. And we’re moving so far beyond the, you know, QWERTY keyboard of the Macintosh that I’m sitting in front of right now, right? So, yeah, of course we have to think about it now — we have to use them now — but we’re not going to invest a lot of money in making sure the keyboard works, you know.

Speaker3: Well, I’m.

Frode Hegland: Not so sure about this. I mean, we definitely need to look at handwriting in the environments we’re working on; handwriting also uses a very different modality and different parts of our brains. But I really do feel that speaking — I mean, look at our transcripts: they’re all very, very hard to go through, because speaking and writing are two different things. So when it comes to authorship — and one of the things I think Apple’s doing really well right now is that you can mix speech and typing; you don’t have to constantly flick in and out, and that’s a really important thing. Absolutely, Dene, what you said about adding notes by speaking is fantastic. So please don’t think I’m saying speech is bad and all of that, but what I’m saying is that for more long-form writing it is quite a different thing. So I don’t think the keyboard is a technology to automatically assume obsolescence for. Adam?

Adam Wern: When it comes to voice input, I think we should also consider much more conversational interfaces, where we refine our text — especially in the transition period where you can’t really trust it 100%, just maybe 95% or 99% — then we need that refinement and correction to what we’re saying. But also, for ourselves, we could speak a first draft and edit it with speech. When we write ordinary text, we have a good method — a pointer, and deleting things — so we have an evolved process for that. But for speech, if we want to correct past speech, how do we do that? For example, when you converse with Siri, it can be quite irritating when you can’t converse to correct it or say what you really meant. So in these text environments we should really think about how you can do a multi-step process where you point with your hands and speak to that pointer — say this word was wrong, replace it with this word, and so on. So it’s more conversational in that sense, back and forth with the computer.

Speaker3: Andrew. Oh, sorry.

Dene Grigar: I was going to say Andrew was going to say something, and I think, Rob, you were going to say something too. So did you have something to add to that, Andrew?

Andrew Thompson: Yeah — it’s a bit of a ways back now, but I just didn’t want to lose what Peter was saying, because I think most people interpreted it as him suggesting another physical keyboard. The way I took it, he was suggesting taking inspiration from the split keyboard to come up with our own gestures without a keyboard, which has a lot of merit. Unfortunately, I do think it might be a bit tricky if we’re inventing our own — basically it’s a new sign language. But the idea is really good, so I didn’t want to lose that.

Frode Hegland: Yeah, that’s really important, and that’s what I tried to get us to do over the last two years; but before we had the opportunity to test, it was pretty much impossible. And while we have to acknowledge the operating systems from Meta and Apple, it is also our responsibility as a research group to try new ones. So what you have now — this to go up and down, you know, to scroll up and down, and this for sideways, as an example — is absolutely phenomenal. So we should definitely have our own, but not too many, because, you know, that’s the question. Bruce was completely new to the group today; I told him a few gestures and instantly he got it. That’s fine. Beyond that — as in, “Bruce, now you’ve got to do a Spider-Man gesture” — you’re obviously at a higher level. So I think we’re all in firm agreement on that.

Dene Grigar: Rob, do you want to say something? You had your hand up earlier.

Frode Hegland: I did. Maybe you’re just waving.

Speaker3: But drowning.

Dene Grigar: Mark, my friend.

Mark Anderson: Yeah, just a quick one — another thing to bear in mind in the mix is that if we’re going to, for instance, come up with some fairly rich things, like a new form of coding, we should actually reflect on what the learning time for that is. Learning a few gestures is not too hard at the moment; there’s a bit more fine motor skill needed. It’s not that the system doesn’t have merit, but whether people can learn it within the time before they lose interest in learning is a genuine challenge.

Adam Wern: And following up on that, I think XR has excellent opportunities to put kind of half-tutorials or guides in there, near you: you can have a ghost hand that shows the swipes for you as a real movement — not somewhere else in a tutorial or a book or a help manual, but actually a small ghost hand showing those and reminding you of the actual gestures, with a real-looking hand floating there beside you. So I think having one of those would be very interesting for us, to show how it could be done. Yeah.

Frode Hegland: I think that is a really important point. And I think, for instance, at my Friday lunch meeting where I’m sitting with my friend, I should pay attention to how much I need to tell him, because just onboarding with a Vision Pro actually takes a bit of time — you know, put your hand here, all of that stuff. In our environment, with the experience with Bruce today, which was only a few seconds — well, a few minutes rather, sorry — there wasn’t that much I needed to tell him; I needed to tell him the palm up and down, the palm sideways, and that’s fine. So when we show these to our friends, that’s actual testing. If we learn that some things they get instantly, that’s great, we can show them quickly, but for more advanced things we may have to do the kind of ghost-guide thing. So, Dene, your note here about noticing anecdotally that they’ve lost the ability to keyboard and are using their thumbs — yes, absolutely, that is very, very important. The reason I learned to touch type was that I was sitting in a San Francisco Starbucks hunting and pecking, and someone behind me said, “What, you don’t know how to touch type?” I thought it was a friend, and I looked around and it was a random person; I didn’t know who it was. It was such a strange experience that I just forced myself to do it. So yes, no question, Dene: keyboards are not going to be the norm. And the hunting and, you know, the two-finger tapping is not necessarily a bad thing either — we can have a virtual thing in space for that as well. The current one is obviously ridiculous, so 100% we need to keep innovating in that area.

Dene Grigar: But my comment, which I think you missed, was that we’ve become inculcated with the idea that we’re not using ten fingers all the time — that we’re shifting a lot of our communication skills to thumbs. Just let me finish — the conversation we were having while you were gone.

Speaker3: I just said.

Frode Hegland: Yes, Dene, I didn’t interrupt you. I just said I’m listening; I just said yes. Okay.

Speaker3: Yeah.

Frode Hegland: Please finish.

Dene Grigar: The conversation we were having was about how it’s difficult to type with your fingers, and I said, but we’re already doing that with our thumbs. What Apple has already done is get us to move from this to this, and now what it’s getting us to do is move from this to this. So the transition has been this, this, this, and this.

Speaker3: So yeah.

Andrew Thompson: Except the keyboard hasn’t been adapted for that, right? When you started typing with your thumbs for texting, they shrunk the keyboard down and added little autocorrect options, and you can swipe around rather than lifting your thumb — they put in things that actually support that. Here, you just have a giant keyboard and they’ve just removed functionality. So it hasn’t been adapted in any way that’s useful yet.

Dene Grigar: You’re becoming.

Speaker3: It’s going to.

Frode Hegland: Be really, really interesting as we move into more of an authoring part of this, because typing in thin air in different ways is, of course, not as ideal as having something to touch. And even if you have something to touch — you all know about the laser keyboards on tables, right? If you don’t have feedback when you touch, even though there are outlines on the flat surface, it becomes harder to hit them. So there’s going to be really good work there, I think. And, as somebody said, the multimodal approach, where you can speak a little bit, touch a little bit, type a little bit, will be brilliant. Right — so we’re kind of still on the text selection tool in our general feedback. One of the things I’d like is the ability to turn this text selection tool on and off in a clearer manner. So I’m wondering — and this is just a suggestion to the group — do you tap whatever is your prime hand to get the text selection tool? Or is the quality of hand recognition good enough that if you point with one finger, it comes up?

Andrew Thompson: I was going to let people answer that question first before I said anything, but maybe nobody has anything. If I’m understanding you correctly: right now it manifests when you sort of bring your fingers together, but obviously that means the tool is active. So you’re saying you want it to not be active at the start, and you have to, like, turn it on?

Frode Hegland: What I would like personally is a couple of things. I’d like you to change the visual design, because right now it looks like a yellow highlighter — that’s a bit of a detail, of course, but that’s what people immediately think it’s for, and it’s not necessarily selection. So that’s part of it. But the other one is that ideally I would like it to come on when you point your finger, and then it’s automatically there, and it is a selection thing. Right now it seems it’s a bit in between.

Andrew Thompson: Okay. But if you point, how do you then select from the pointing?

Speaker3: That’s right.

Frode Hegland: That’s a big question.

Andrew Thompson: I feel like that’s back to our initial gesture, which was you point and then you tap to select. And it didn’t work well.

Frode Hegland: No because then your whole pointing finger moves.

Andrew Thompson: Yeah. So we’ve tried that, with tapping the side as our select button, but that often lost tracking because of how the headset could see your hand. That was in some of the initial tests: the laser comes out of the finger when you point, and then you tap to select. It’s almost identical to what we had, and I know we didn’t like it, so I don’t know if we want to go back in that direction.

Frode Hegland: So before Adam — I see your hand — just really briefly: considering this is now from the hand, not the fingers, right, the actual arrow so to speak — what if we have a tap gesture that produces it and does the lasering? Sorry: you tap the hand to make it appear, to turn it on; then you point about; then you do a tap with the active hand to do a selection; and if you want to get rid of it, you tap the hand again. Does that make sense?

Andrew Thompson: Yeah. I don’t think you want to be tapping both hands, though, and have them do different things. Of course I can be wrong — I’m just a developer — but we talked before about having tool buttons in the menu, which is like the point of the menu. So you open the menu and you select the tool, and now you have the tool active in that hand. Perhaps we can just go that route: it’s going to always be there as long as you have the tool selected, but the tool is not there if you have your hand open, or something — it’s just that the hand is now looking for that tool.

Frode Hegland: I agree with that to a huge extent. However, this is the interesting thing about making some things super easy and some things not as easy. The act of selecting is such a human thing — I don’t want to have to turn on a tool, even though I’m saying it; I want to be able to grab, to be able to select. I shouldn’t have to go to my tool chest, so to speak. That should be as quick as possible. Other things like scaling, which you have as a slider — I think that makes a lot of sense. But just to move it, you know, move it there, move it there, and then I want to select. I don’t know — what do you think, Adam, who’s been patient?

Adam Wern: I’ve played with pointing just to start the tool and a flat hand to drop it, to remove the laser. So pointing starts it, and having a flat hand removes it, and in between it stays as it was before. So if you go from a flat hand to a natural position, you won’t get a laser; but when you start it by pointing, you get the laser, and then you keep the laser until you flatten your hand a bit, like you’re dropping the tool. And that version felt good to me in my prototype — but it could be just me.

Frode Hegland: How do you do the action once you have the laser, then? So you’re pointing at a reference, for instance.

Speaker3: Yeah.

Adam Wern: Since I’d attached it to the wrist, it wasn’t that important at that point — I had a kind of clicking, so it was different. But we can still have it: the pointing just starts the laser pointer, with the exact same interaction as we have right now. Pointing just turns on the selection tool, and a flat hand drops it or removes it, so you don’t have a laser coming from your hand all the time — that is also irritating in some circumstances. So you can drop the tool whenever you want.

Frode Hegland: Okay, let me see if I’ve got it. So I think this is what you said: point to turn on the laser pointer; then you have the laser pointer; tap to select; point again to drop the laser.

Adam Wern: No — flat hand to drop the laser pointer, as in you drop it on the floor, because that’s kind of not pointing.
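
The pointer logic Adam describes can be written down as a tiny state machine: a “point” pose turns the laser on, a flat hand drops it, and anything in between keeps whatever state was already active. A minimal sketch; the pose classifiers are assumed helpers, not existing prototype code:

```typescript
type LaserState = "off" | "on";

// isPointing / isFlatHand would come from whatever pose classification the
// prototype already does; they are assumptions here.
function nextLaserState(
  state: LaserState,
  isPointing: boolean,
  isFlatHand: boolean
): LaserState {
  if (isPointing) return "on";  // pointing starts the laser
  if (isFlatHand) return "off"; // flat hand "drops it on the floor"
  return state;                 // relaxed hand: keep the current state
}
```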

Speaker3: Okay.

Frode Hegland: Andrew, what do you think of both? Because I like that. But also remember, this means moving things up and down.

Andrew Thompson: Yeah, you’re getting repeat gestures at that point. We could change the swipe gesture, or we could keep the concept of dropping it but have it be a different motion.

Adam Wern: Yeah, but if you swipe, you don’t want the laser pointer at the same time. That’s true.

Andrew Thompson: So it still works, but we have to think about this with other tools as well, right? I assume if the selector was all we had, we could just have it be a single gesture. But we’re assuming we’re going to have several tools that you want to be able to pick between. Yeah.

Adam Wern: And if.

Andrew Thompson: You’re saying like.

Adam Wern: Is there a difference between text selection and selection in general, like picking?

Andrew Thompson: I think so. The way I was picturing it, this selector tool is essentially our equivalent of a cursor: you could select stuff, or you could click links, things like that. That’s what I was picturing. But maybe that’ll become too convoluted and you’ll want something else for that.

Adam Wern: It could be relevant in terms of ambidextrous work, when you use both hands: how would that work with double laser pointers? Can you only have one? Can you do selection of objects like books and move them around with one in each hand, but only do text selection with one hand at a time? So there are some complicated cases here that we need to consider.

Andrew Thompson: It probably makes sense that each tool can only show up once at a time. Like, I’m picturing this button: you press the selector-laser button, and now it’s grayed out and you have it in that hand. The other hand can either press it too, and then it swaps hands, or you can select a different tool or something. Then of course both could have nothing. I don’t know — it’s something we’d tinker with as we get more tools.

Adam Wern: Well, especially when you’re. Yeah. Sorry. No, no.

Frode Hegland: Adam, please.

Speaker3: I’m sorry.

Adam Wern: Especially when you — I found in my prototypes, which were ambidextrous, that you could move one book in each hand, hold two PDFs, one in each hand. That is kind of a selection, or holding, or even remote grabbing — I had one in each hand, and it felt very, very good to be fully ambidextrous and not have to remember whether you have a tool in the hand: just doing it by gesture and not by a kind of mode or modality. Yeah.

Andrew Thompson: Now our equivalent is the swiping right since we can’t like, hold things. We like cut back a lot of it. So having both hands swipe would probably be good, except. You’d have the whole like if you point with both hands and try to do stuff like which one takes dominance?

Frode Hegland: Just a question for you, Andrew, before Mark and Rob: if I have multiple references up, in order for me to determine which one I’m moving up and down, is that based on which one I’m gesturing at with my flat palm, or is there another selection? That’s all it is, right?

Andrew Thompson: Yeah, it just works on whichever one you’re sort of gesturing at, and it’s pretty decent at that. Of course, if you have a bunch of little things all near each other, it’s going to be hard to tell which one you’re getting. But I guess you just keep swiping until they all move out of the way and then get the one you want. Yeah.

Frode Hegland: Yeah. Mark, please.

Mark Anderson: Just a couple of bits of feedback from trying things again. It was interesting with the pointer: it looked to me as if the pointer seemed to be attached here rather than there, because when I was holding my hand like this, still, I was just surprised at the amount of tremor in the line — I don’t have particularly shaky hands, but obviously we all have a degree of micro-tremor. So I wondered whether, in a sense, the wrist is actually more stable, because there’s still a degree of tremor in there, and whether there’s some fine-tuning allowed there. I totally get the point: if I know that the pointing is coming off the wrist, and as long as I know that before I start using the tool, it doesn’t bother me, especially if I understand it’s because it actually makes it more stable. The other thing is, I was trying to work out how to interact with the control pad. A problem I had there — it’s probably this distance thing — is that I couldn’t work out where the environment thought I was trying to interact with it; it’s a sort of cognitive confusion. There was a little control box sitting almost here for me, but whether I tried to literally, virtually touch it or tried to hold my hand up here, I wasn’t getting any feedback. Again, I don’t say that as a critique — this is all part of the discovery thing, and it’s good — and possibly it’s also because I’m working in a small boundary. I just offer that for what it’s worth; it’s the kind of thing that, if you don’t see it, you’d wonder what on earth it was describing. Okay, thanks.

Andrew Thompson: I can clarify a little bit for the menu. It is just a touch; you just touch it. It's not a ranged thing. But because I've only implemented one hand at a time right now for testing, the only hand that does the touching is the one that doesn't have the menu sphere on it. So you tap the sphere and then you can press the buttons in the menu. The other hand doesn't do anything. It obviously will in the end.

Mark Anderson: I'll try again. I thought I'd tried both hands. We'll see anyway. Yeah.

Andrew Thompson: That’s strange. Yeah, it should just be like an actual virtual touch. And then see, you had a, you had a second comment about it attached to the what looks like the knuckle. And it is positioned on the knuckle it is attached to the wrist. Now for my testing, I believe the knuckles are almost faked. I don’t think they’re actually tracked. I basically they they move the same as the rest. So if you look at it, they’re pretty much exactly in the same spot. They just rotate. Right? So you’re not actually getting any more shake because it’s further down the hand because it’s the same anchor point. That being said, a lot of that jitter you’re seeing is not your hands. It’s just the tracking being imprecise. And that’s why we’re like, not completely sure what to do about it, because that’s very much just what the headset is giving us. So to do any smoothing, we have to add our own layer on top of it.

Mark Anderson: That’s really useful to know that it’s partly an artifact of the of the system rather than rather than the wetware, because knowing which is which, because both both are equally probable in the sense. I also just try the I just tried the control thing. I managed to get it to the point where it’s rotated to have the two sliders and a little dot above it, but using either hand, I couldn’t manage to interact with it at the moment. I don’t know, I suspect I suspect it’s partly it feels like it’s a thing of, of, of a sort of confusion almost a cognitive confusion between where the, the sort of environment thinks the The sort of thing being interacted with and the and the interacting object versus the actual experience of it. In the thing, anyway, I was I again, just, just I just mentioned it for what it’s worth. Not not that it’s causing me bother.

Andrew Thompson: Is it perhaps getting placed in your table, so you can't reach it because the table's in the way? Is that what's going on?

Speaker3: What’s going. I’m.

Mark Anderson: I appear to be touching it with my tracked hand, and I can sort of poke my hand through it, so I can reach it. I've tried pulling my hand far back to give it some distance, but I think the problem then is that the headset tracks forward, so if you put your hand back here you get the distance, but you're not being tracked.

Andrew Thompson: It’s only tracking off your index finger. So it’s like it’s right there at the tip. So it just wants the tip to touch the buttons. That’s all it’s looking for. So if you’re poking all the way through you’re now technically behind the button, so it’s not going to do anything. Sure. But exactly the gesture of going through it should catch it unless you’re going super fast. So I really have no idea why that’s not working for you. So I’ll have to test it to myself and figure out what’s going on.

Frode Hegland: On this particular issue: Dene, you don't have any problems pressing buttons in the control panels on the Vision Pro, right?

Dene Grigar: Why should I?

Frode Hegland: Because that's what Mark was saying. The buttons don't seem to activate. Andrew, have you also tested this on the Vision yet? Have you been in the lab?

Andrew Thompson: No, I haven’t tested in the vision.

Frode Hegland: Okay, because they seem fine for me. That's why I'm asking you guys, in case this is maybe a Quest 2 hand-tracking issue. That's why I'm asking, that's all.

Dene Grigar: Well, the Quest 2 is going to be totally different, right? I just have the Quest 2; I was testing this with that headset, and it's a different experience. The Quest 3 is better, and the Apple Vision Pro is even better, so there's a hierarchy there. But the Quest 2 is just not that well adapted to this.

Speaker3: Yeah. Yeah, that makes sense.

Andrew Thompson: If the menu is too far down for you as well, Mark, it's based on roughly where your head is. So if you look up and tap, it'll be placed higher up for you.

Mark Anderson: Yeah. No, I suspect it's a combination of things; I probably just need to reinitialize. I also got it to a point where it's still running at the moment; I took the headset off, but I put it back on because of that. The thing is, when I do this, I'm not getting the exit symbol either.

Speaker3: Oh, I’ve had problems.

Frode Hegland: With that Mark. Even on the Quest Pro to get the oh my God, it is the most annoying thing.

Mark Anderson: That’s fine if it’s just that I’ll keep doing. Which is my question about is there actually something if I get stuck in a simulation like that, is there a command I can send to the headset to sort of that, other than in the environment to basically like an escape key that you can use externally otherwise? I mean, otherwise you can just always restart the headset.

Frode Hegland: The controller button, or on the Vision you press the crown.

Speaker3: Okay.

Mark Anderson: Ready? Yep. Understood.

Speaker3: But, Rob, I'm having a lot of trouble with the Vision. I got my Bluetooth keyboard connected and the trackpad connected. I could move the cursor around, but I couldn't get it to do any typing. Maybe in Notes, but I've got the iPad version of Pages in there, and I can't get anything to type at all. So I'm in need of an instruction manual that will actually tell me something. You posted a video, I think, with a guy who was typing on a Bluetooth keyboard, and it had a little bar that went around with the keyboard, which I think was software generated. I can't find that with my Magic Keyboard, so where does that come from?

Frode Hegland: So, when typing: I have a really crude version of Author that I've been experimenting with. The first thing you need to do, of course, is to select an insertion point; this is also the same in Notes. You look at somewhere in the document and then pinch, and once you've pinched you will have a normal blinking cursor and the floating VR keyboard. But if you have a Bluetooth keyboard, you should be able to just type on that. Do you get that insertion point?

Speaker3: I could get the insertion point, but no typing.

Frode Hegland: Not even a floating keyboard?

Speaker3: No, the floating keyboard I can use, but it's really unusable.

Frode Hegland: But it’s the Bluetooth one.

Speaker3: I have to look at each key, or else I have to stick my finger in it.

Frode Hegland: Okay, so. But the Bluetooth keyboard, have you gone to settings and paired it?

Speaker3: Yeah, I paired it. Okay.

Frode Hegland: And it doesn’t seem to activate.

Speaker3: Oh it’s not. It activates, but it doesn’t do anything.

Frode Hegland: Is that the same keyboard you're using for your computer? Yeah, that's the problem.

Speaker3: That’s a problem. We have to throw it away on Monday.

Frode Hegland: On Monday, I spent the first five minutes of the meeting trying to unpair my keyboard from the Vision in order to use it on the computer so I could join the meeting. That's why I need to buy a second Bluetooth keyboard, but it's so gosh darn expensive that I've decided to wait a bit. If you get a normal Apple keyboard and pair it exclusively with the Vision, it should be a much smoother experience.

Speaker3: Well, that may be the answer for that. Unfortunately, about selecting: I think Adam suggested selecting by word rather than with an insertion cursor, which is not that easy to see. I think if you pointed at a word, clicked on it, held it down and selected a run of text, it would be a lot easier and more intuitive.

Frode Hegland: So the selection we're dealing with now is, of course, primarily for reading rather than writing. We have to decide, in the text that's on screen, what the selectable units should be. For instance, if you're looking at a reference on screen, should a selection select the whole reference line, with title, author and so on, or should we try to be more granular, so that if you select something close to a name it selects the whole name, and that kind of thing? That is absolutely going to be important for our testing, no question. I do agree with you that working in a headset, even with a keyboard, is sub-par when it comes to writing. It's surprisingly bad in my experience, too.

Speaker3: Well, the guy in the video does his writing in the headset with a Bluetooth keyboard, and it seems to be pretty efficient. But he writes in Notes, which is not the best word processing platform.

Frode Hegland: Exactly. Once you have a keyboard, especially the Apple one, paired with the headset, then in my experience it also works well. But if it's shared with a computer that's on in the same room, they start fighting with each other. So, Mark, can you please read out what you put in the comments? It's a very good point.

Mark Anderson: Oh, just to say: obviously not all formats use a nice clear number, like a little target, but in the case of the ACM format, which does, I wondered whether that could act as a proxy target. If you want to select a whole reference, and we had a notional equivalent of a double click, then as soon as you point at the number and double click it, so to speak, it would just select the reference. Part of my thinking here goes back to the point that in this bizarre thing we call academic reading of papers, you're effectively bouncing around, and a lot of what you're doing is intentionally moving, even if only in your mind's eye when you're using the paper equivalent, between the references and sections of the text itself. So one of the conceivable things you might want to do at that point is, in effect, to indicate a specific interest in a reference as an information object, because you're going to do something with it.

Speaker3: Yes, exactly.

Mark Anderson: You might next want to see everywhere it occurs. Or you might want to say, no, I now want to take that information and put it into my reference manager or whatever.

Frode Hegland: Yeah. But Mark, can you please write something on this? That’s something I’ve been thinking about and working on. And if we have different perspectives, I think that would be brilliant. Thank you. When Andrew comes back, I’m going to show something to go into the next section, but keeping this section because it’s so interlinked. But of course, Adam, please go ahead first.

Adam Wern: On the reference section: I imagine it could be a halfway selection, where we just point at the reference without saying exactly what we want, and it expands. The authors get expanded into other objects; the paper becomes a PDF on the side; the reference has a text blob that, if you want to keep it as text, becomes one object. So you get larger targets to work with. If you want the PDF, you just pull it out from there with your laser hands or whatever, and if you want to follow a specific person, or a university or a journal, you go that route. And as soon as you deselect that paper, those things disappear, or fade away if we're really fancy, but at least move out of the focal point.
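
One way to picture this "halfway selection" as a plain data transform; the field names below are invented for illustration and are not the project's actual metadata schema:

```typescript
// Hypothetical shape of a parsed reference entry; real metadata will differ.
interface Reference {
  id: string;
  title: string;
  authors: string[];
  venue?: string;
  pdfUrl?: string;
  rawText: string;
}

// An expandable object the user can pull out of the reference with a gesture.
interface ExpandedObject {
  kind: "pdf" | "author" | "venue" | "text";
  label: string;
  sourceRef: string; // id of the reference it came from
}

// Pointing at a reference expands it into separate grabbable objects:
// the PDF, each author, the venue, and the raw citation text.
function expandReference(ref: Reference): ExpandedObject[] {
  const out: ExpandedObject[] = [];
  if (ref.pdfUrl) out.push({ kind: "pdf", label: ref.title, sourceRef: ref.id });
  for (const author of ref.authors) {
    out.push({ kind: "author", label: author, sourceRef: ref.id });
  }
  if (ref.venue) out.push({ kind: "venue", label: ref.venue, sourceRef: ref.id });
  out.push({ kind: "text", label: ref.rawText, sourceRef: ref.id });
  return out;
}

// Deselecting the reference would simply discard (or fade out) these objects.
```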

Frode Hegland: Yeah, absolutely. Absolutely. Are you back, Andrew?

Dene Grigar: It’s not quite maybe having some difficulty with internet connection.

Frode Hegland: Yeah, yeah. He said he was going to be gone for a bit, and that's of course cool. Well, not cool, but necessary. I'm just changing a colour thing here for something coming up. It's nice, though, with this kind of discussion, to go through what we have and discuss how to go further, rather than just discussing in thin air. So this is really, really lovely. Also, guys, I put a link to our FTL page in the chat; if you can have a look at that. At the bottom of the page, which is Andrew's page, I'm putting refinements for the next build. Of course it's heavily influenced by my perspective; I've written a few things ahead, so please say what you agree with, don't agree with, or what should be added, and of course we'll discuss the whole thing.

Adam Wern: As a bit of entertainment while Andrew is away: what would be alternative ways to show the reference section here? I can imagine that we group them somehow; that would make the reference section a bit roomier. And the objects we talked about a moment ago, like the PDF, the institution, the people involved in a paper, could be grouped together, but more as visual placeholders or objects that could be expanded immediately. Right now we have a very long list, and it's a text list, with the text as the primary object. Of course, text is good as labels, but when we have 100 references it could be worthwhile to look at more graphical or expanded ways of visualizing them, or grouping similar objects together in some way.

Frode Hegland: Absolutely. And I hope everyone will write or mock up or do something on that, because I feel that Andrew is building the infrastructure to make that possible, and what you just said, Adam, is exactly the research. So, Brandel, I just wanted to show you the link I posted in here. This is where there's a little video description and a link to the actual work that Andrew has done for today, which I'm very, very excited by. We're going through how to polish some of the interactions and how to go further. Mark, Mark, Mark.

Mark Anderson: Yes, I think you’re the point Adam raised is because in fact, I sort of took on the discussion that had been raised by sort of brundle’s point about having some sort of the almost a gloss attached to the reference list. Some extra information led to a very long, a very good hour long conversation with Dave Millard the other day about that. In and this is in the context of sort of the academic reading, but both the academic reading or other people trying to actually interact with the complexity of what is the wrapper of an academic paper which we forget is, you know, the, the only audience of, of it’s not just an academic who’s going to read the paper, other people have to read it as well. And they’re not necessarily well served by the rather gnomic way in which a paper is assembled. So but one thought is that I could see, I could see the usefulness of having, in a sense, a number of different modes of the reference display. I say modes. This is predicated on the fact that if if the environment knows the, the, the reference list as, as information objects. So we fed it in. So we’re not just using a pictorial representation. So if I take that as said otherwise it doesn’t sort of make sense is for instance I mean you might for instance want to see if you are looking at a textual representation in context, you might actually want to see the references that are that are actually there whether they’re visibly linked or not.

Mark Anderson: And that wasn’t a deliberate sort of Ted reference, but, you know, whether you want actually a visual indicator that goes to there. The other thing is. Presuming, presuming something that we don’t do at the moment. But if the author was using a tool that allowed them to gloss their references in a way that it explained that roughly their purpose, you know, this is an introductory reference. This is a reference used in argumentation. The interesting conversation I had that I wasn’t surprised when I spoke to Dave, his immediately said, well, you know, we tried linking link types back in the day, but that was being done for, I think it was I version two where people were trying to do fixed argumentation computation across the links. That’s not what we’re doing here. What we’re actually trying to do is to give sense, to actually capture something that’s otherwise left as implicit for the more experienced reader, and that’s not helpful if we want to communicate at scale. So it’s something our tools can’t yet do, but our writing tools hopefully will. So, for instance, you know software agents could be sort of realizing that, okay, you added a reference here and maybe, maybe giving the feedback I think you’ll make this basically is part of your structural introduction or something.

Mark Anderson: And you could accept that or change it. These are the sorts of things I think will come into our writing if we move to truly digitally native documents that aren't effectively a facsimile of writing. I'm skipping ahead a bit, but what that means is that if we presume some of that information is there, it might give some ideas as to how we can present the references. Bearing in mind that current print modes mean that, depending on the publisher, references may be ordered by author or by occurrence, it doesn't really matter if we have the information; we'd probably rather have them in the context of where they occur, and possibly just the small relevant part. So if we're only looking at, say, the introduction of the paper, we might only want to see the references next to it. And when I say looking at the paper, that's not necessarily looking at a white rectangle; we might be looking at a block of text which has been extracted and presented for us, the bit we're looking at, and we might like to see alongside it the references that occur in that section, for instance.

Mark Anderson: So I think that’s that’s where this environment actually allows a great amount of traction. I think it’s going to allow far more for consumption of paper than it is for, for the writing, simply because I think there’s a lot of interaction as, as our earlier conversations showed before, at a point where that’s something we can do easily and rapidly. But I do think I absolutely do think that in terms of the sensemaking of complex, structured documents like an academic paper, that this sort of ability to sort of slightly deconstruct it and reassemble the parts with much, much clearer context. Could be remarkably useful and, as importantly, make it accessible to a wider audience who don’t necessarily need to be trained in the internal structure of the document to understand it. Because at the end of the day, most of this, most of these papers are being produced with public funding. So there shouldn’t be any reason to stop anyone from using it. I mean, the fact that academics are mainly writing for academics is sort of neither here nor there. I do. I do profoundly feel that the easier we can make it for more people to consume it the better it is for us all.

Speaker3: I agree.

Dene Grigar: Good. Good work, Mark. Absolutely.

Adam Wern: They could also, and we discussed this before in the group, support the social aspect of reading in a working group, or post-publication annotations; those are also very important. So even if the ACM or a hypertext conference don't like to have emoji stars in the reference section, because that's not their tradition, we could support that as an overlay or as extra annotations that we add there. You could put stars next to references that are really high class, that should be read, or mark things that are there just for historical purposes, to show that this was the thinking at the time but not the recommended way of thinking anymore. We could add that as annotations, and that would be an easier sell, I think: a kind of sidecar metadata that we could put on documents. That would also allow working groups or established scholars to do a service over documents and make recommended reading lists that we could overlay on the reference sections. That would be a wonderful thing.

Dene Grigar: Let me jump in on that really quickly. I imagine that would work the same way as a WordPress site, where you have your tags, like news or update, and you can tag it: is it news, is it an update, is it archival? We can come up with our own terminology, but you'd automatically tag it with those concepts. I'm sorry, Brandel, I didn't mean to step over you.

Frode Hegland: Brandel, I'm going to step on you a little bit, too. Dene earlier talked about how nice it would be for her to be able to add voice notes while she's working. That's the kind of stuff our system should absolutely be able to store. So, all the multimodal things we've been talking about: our references are not normal. So yeah, that would be wonderful. Agreed. Brandel, and then Andrew is back.

Brandel Zachernuk: Yeah. So, Mark, that frame of reference about that sort of manipulation, the decomposition, I think is really relevant, and I had some incredible proof positive of its importance this week, actually. There's a fellow at work, I think I may have mentioned, who's been looking at, he clarified, transformer-matched networks, which are a form of LC series and LC parallel circuits, where you're matching the resonant frequencies based on the harmonics of two inductors within a circuit. And he saw a video I did about cognitive science and asked, does that mean we could do this data visualization in VR? And I said, well, yes. I wasn't expecting anybody to understand that, but thank you. The quality of what he's done is really astonishing. It's actually on CodePen, so you can see it there with a Vision Pro. But one of the things we were talking about yesterday was that the thing you do to act on the system is only to get a good enough view to decide what the next action should be, which will then improve your view of it for the purposes of taking action. So perception over the thing is never a neutral operation; it's always and only for the purposes of deciding what to do with it. It's your observe, orient, decide, act sort of loop. One of the things that static documents have made us think over the years is that all of those things must necessarily be interior actions over what is a nominally static kind of corpus.

Brandel Zachernuk: But that’s that’s not the case that we have the ability to manipulate. And when you do, then you can come up with really different things. When you’re involved in the process of processing, then you can you can make better decisions about what it is you want to do to the data to see the stuff that’s new. So I, I’m really, really excited about being able to render documents manipulable. The other thing about it is that, like at some level, technically it would have been possible for somebody to do this in a 2D panel with a mouse. Having the ability to manipulate these six degree of there’s 3 or 6 degree of freedoms widgets for being able to do it. And in fact, you know, a lot of people will have a tendency to want to want to backfill whatever has been done in, in VR to, to be able to, to run there. But the reality is that when those things don’t happen at a reflexive speed, when you don’t get the feedback that this is, that these things correspond to those, and these are the ways that those relationships are amplified or muted as a consequence of prioritizing things.

Brandel Zachernuk: ...it just doesn't ring in the same way. The kinds of things you get from these ready-to-hand operations and stimuli are an absolute step change in terms of what you have the ability, down in your reptile brain, to understand. So I was honestly a little haunted by the progress he's made with this boilerplate, and I'm incredibly excited by the ability to intervene on a corpus of stuff with real view specs, the way that Doug, I think, really wanted. The other thing, as you were saying about annotation, that it also made me realize: there's this academic tendency to prioritize original work over annotations or critiques, this understood necessity to present even a critique as an original work. That is something we could change, modulo inconvenient things like intellectual property rights and publishing rights. Yes.

Frode Hegland: I think we need to move on. All of that is relevant and important, but I think we can move most of it to the Monday meetings.

Brandel Zachernuk: I’d like to finish this, please. You having the ability to publish something that exists as an annotation over an additional work, and I think that that that manipulation and that the presentation of those additional manipulations specifically over that work is an essential component of this as well. So, yeah.

Frode Hegland: Oh, I think that is super important. I think that is design work, and maybe next Wednesday we schedule it as a topic for the design half, because it falls under metadata, of course, however we choose to divide the words. It's becoming more and more present and relevant. So we're now going to move a little bit into the design discussion. We don't have that much time left, but please note that this design discussion is completely related to what Andrew just showed. I have a little thing I would like to present. I hope Dene will be back soon, because I miss her. Let me just share my desktop. Hang on, the speech bubble for Zoom was in the way.

Speaker3: Right.

Frode Hegland: So the first thing here: this is an old slide. I just wanted to put it in because we've been talking a lot about targets. This is an example where the user has pointed to a number, 11, so that is made especially big, and one of the author names is selected, so that's made more special and bigger too. Depending on what we find we need in the future, we can decide what is reasonable in terms of resolution for what we select. Any comments on that, especially from the techie guys? It's something to check over time, right? In computer games, when we talk about a target to select, it can be a box around a visual image; it can be small, it can be big; it doesn't have to match the visual thing. Similarly with the iPhone keyboard: when that was first developed, the letters looked the same, but they had very different targets. So we need to decide here what the targets should be; maybe we only allow names to be selected as one thing. Right. Okay. What I'd really like to look at is not looking back, but this stuff here. Oh, Dene, you're back. Great. This is quite brief. This is basically just notes for me, for the future; you don't need to look at it too much. But here is the point: it's fun to do the kind of thing where you extract something. Here we have the abstract; you tap on that and you get more of the abstract. But what I want to show you is a very brief thing. In the top right-hand corner, can you all see "key authors"?
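
A sketch of the invisible-hit-target idea mentioned here, assuming each selectable item carries its own hit radius independent of its drawn size; the pointer picks whichever target the ray passes closest to within that radius. Names and numbers are illustrative only.

```typescript
type Vec3 = { x: number; y: number; z: number };

interface Target {
  id: string;
  position: Vec3;
  hitRadius: number; // can be much larger than the visible element
}

// Distance from a point to a ray (origin plus unit direction).
function distanceToRay(p: Vec3, origin: Vec3, dir: Vec3): number {
  const vx = p.x - origin.x, vy = p.y - origin.y, vz = p.z - origin.z;
  const t = Math.max(0, vx * dir.x + vy * dir.y + vz * dir.z);
  const cx = origin.x + dir.x * t;
  const cy = origin.y + dir.y * t;
  const cz = origin.z + dir.z * t;
  return Math.hypot(p.x - cx, p.y - cy, p.z - cz);
}

// Pick the target whose (possibly enlarged) hit radius the ray comes closest
// to entering, the same idea as the iPhone keyboard's invisible key targets.
function pickTarget(targets: Target[], origin: Vec3, dir: Vec3): Target | null {
  let best: Target | null = null;
  let bestScore = Infinity;
  for (const t of targets) {
    const d = distanceToRay(t.position, origin, dir);
    if (d <= t.hitRadius && d - t.hitRadius < bestScore) {
      bestScore = d - t.hitRadius;
      best = t;
    }
  }
  return best;
}
```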

Speaker3: So yes, it’s very small.

Frode Hegland: Yeah, it is supposed to be small to begin with. It is quite literally supposed to be in the background, and what I mean by "in the background" is something for us to discuss if we choose to go this way. But I also wanted to show the list of references here on the left. It has a title, and that title can be interactable. So what the user does now is tap on "key authors", or look and pinch, or however you want to see it. What we have presented here is a list of what the user has decided are key authors in our field, so they're all presented in italic. Then you can see the names of the authors. Does that render okay on your screen?

Dene Grigar: Okay, I can see those. Thank you.

Frode Hegland: Good. Now, it's a bit tiny, a bit bitty. At the bottom of that list we have a very simple set of control interactions: "show connections", "show only" and "edit list". This is obviously not any kind of final thing; it's just meant to show that when the list is open, you can decide what the list will be. Right, that's all. So now we tap on "show connections", and we get the most obvious thing we've talked about for a while. The point here is that you can have many of these; this thing called "key authors" is just a list, a kind of criterion. Imagine you have another list called "authors whose work I don't trust". You can have that open at the same time if you want, and if you have a lot of lines going from it to the reference you're reading, you might feel this document just isn't worth reading. So, this key authors thing: I'm going to turn "show connections" off and tap on "key authors" again, and now it collapses and has hard brackets. Currently, in the design style, all that means is that something with hard brackets is a list of some sort you can expand. And just to be obvious, last slide: at the bottom of the reference list there are options, including how it should be shown. Currently it's showing by occurrence; it might make more sense to show an alphabetical listing if you're going to align it with key authors, so the lines going across are less messy. Now, this is just a rough mock-up. These preferences might very well live on the prism that Andrew has developed. That's all I wanted to talk to you about: the notion of having these kinds of lists in space that you can use as criteria to analyze your work against.
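
A minimal sketch of what "show connections" could compute, assuming references are available as parsed entries with author lists (the names are illustrative): match each reference against the key-authors list and return the pairs to draw lines between, optionally re-sorting the list alphabetically so the lines cross less.

```typescript
interface RefEntry {
  id: string;
  title: string;
  authors: string[];
}

// Pairs to draw lines between when "show connections" is on:
// one entry per (key author, reference) match.
function showConnections(keyAuthors: string[], refs: RefEntry[]) {
  const pairs: { author: string; refId: string }[] = [];
  for (const ref of refs) {
    for (const author of ref.authors) {
      if (keyAuthors.includes(author)) pairs.push({ author, refId: ref.id });
    }
  }
  return pairs;
}

// Re-sorting the reference list alphabetically (instead of by occurrence)
// keeps the connection lines from crossing quite so much.
function sortAlphabetically(refs: RefEntry[]): RefEntry[] {
  return [...refs].sort((a, b) =>
    (a.authors[0] ?? a.title).localeCompare(b.authors[0] ?? b.title)
  );
}
```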

Adam Wern: So what are the main use cases you have in mind? Why do you want to see this? Is it to find out whether they are citing the right people, or what would it be? Because it's not obviously clear to me why you want those selected.

Frode Hegland: So I’ve written a little bit on the references as hypertext nodes, which is what we’ve been talking about for a while. And there are obvious things that can be listed. There are because these are citations. It may include other stuff. Furthermore, there can be specific user ones, including what Denise just said, that I just added written notes. So then we come down here to to this paragraph. This is what this is about. The user might have their own specific library sets slash criteria to match, such as documents considered, canon documents later retracted, documents considered high value, and so on. So the the use case of this is that these are all analysis sets and they could be lots of different things. So the basic use case would be a teacher opening up a student document. And already in their place they have a few of these things hanging about. So if they say a lot of references to somebody you consider to be bullshit, that’s a warning sign. You instantly see it. Maybe one of them is just a list of what you, the user, have decided is canon for the field. You see other cover that close it, turn it, you know, move it away etc.. Does that help address? Okay. Mark.

Adam Wern: You're muted.

Mark Anderson: My apologies. Yes. I just want to come at it a different way, thinking back to the previous discussion we were having. In the example shown, there happened to be a sorted list, and that's fine, because that's what the demo was showing. But for instance, if you said, show me the authors I'm interested in, it might only show the authors you're interested in. Bearing in mind, and it's unfair in a sense, that the list being used just happened to be sorted by number, and I think that's on the author's last name, it's an arbitrary sort. So one of the questions floating around, and I think this goes back to the point Brandel was making, is that you don't really know until the moment you're interacting. You might want it sorted, you might want to see only the papers of the type you're interested in, you might want to see them in the context of a particular sort.

Mark Anderson: You might want to see where they occur in the run of the text. So what was going through my mind is: what do I need to know in order to be able to do that? It's a metadata question, and I know that's something a lot of documents don't necessarily have at the moment. With an HTML rendition we do at least know where the anchors sit within the flow of text, so we can do some things. And what I'm saying isn't antipathy toward the sort of display Frode has just shown, because, for instance, if you're doing that in a document, say in Author, that would be a very good way to do it within that 2D display spec. Thinking of it in XR, I think it makes sense to be more dynamic, if we have the metadata. So I think it's one thing we can explore. We also have to be mindful of what we can do with more limited input resources, if that helps.

Frode Hegland: Yeah. Two replies to that, Mark. Number one, we make the assumption that we have the metadata, because the metadata is part of our Sloane thing, so we fake it until they make it, in a sense. Secondly, I agree with every single thing you said; those are the kinds of things we should interact with. So if you want to write down the kinds of commands you want, in whatever form, I would love to mock that up. What I showed was three little options, very anemic compared to what should be done. But to be able to have, if I understood you right, one of them in the list be "show me only the references that match these criteria", rather than just lines? Yes, absolutely. That's the kind of thing we should try, for example.

Mark Anderson: And I want to stress that I wasn't trying to critique the display as presented; there are limitations to what you can do without a vast amount of effort. I'm thinking of the plastic space represented by our XR environment. If every item in the reference list is effectively an addressable object, then potentially we can do what we like with it, and the question becomes: what would we want to do in the moment with it? I'm still processing Brandel's observation from earlier, that the intentionality changes constantly, and I need to think on that some more. Blindingly obvious after the fact, in a sense; because it's intuitive, we just do it. But abstracting that and thinking: how do we offer people the affordances to map their momentary changes of intent, and present them with the relevant affordances? We have a plastic environment, in that we can broadly make what we want in it. We presume, in the future, that we'll have the information with sufficient granularity, and the visible extras or metadata behind it, to support the changes we want to make. So that's a fun question.

Mark Anderson: So, what are the tools? I say that because I don't know. I can sort of imagine them, but the trouble is that when you imagine them, you tend to elide all the difficult bits. It turns out that something you think of as a tool isn't really a tool; it's actually a massively complicated transform that you've just magically imagined. So I'll noodle on that a bit more. But I think what it says to me is that we're probably looking to have a much more dynamic relationship between the citations, where they are, and the parts of the document. Something came out of the conversation I had with Dave the other day. He said, it occurs to me that what we're doing is actually building a new generation of reference managers, which is kind of weird. In other words, when you're looking at the document, the document is almost part of the gloss on top of the references. It's another way of looking at it; it's one abstraction. I'm looking at all these angles at the moment simply because I think it helps to understand what we need to engineer into the documents going forward. And, looking at the floor, I noted I'll take away the idea about trying to put additional annotations or thoughts on top of the paper.

Mark Anderson: I need to talk to Gabo of the Minter team again; that's one of the other projects rumbling along using the same data set, which is why it's relevant. It's not something we necessarily have to do immediately in XR, but it will be useful to us going forward: we're beginning to put some information there. The documents have essentially been put into addressable blocks of metadata, and they can be transcluded. So there's an interesting thing: I don't think they've yet thought as much as we have here about doing funky stuff with references as objects, but that's something I'm definitely going to take to that lot next and say, okay, if we're able to comment and gloss across the top of this, perhaps we can then transclude information underneath it. So, for us, if I'm the person doing the annotation, that's the centre of my observational sphere, so to speak, but then I can transclude in information from the underlying or associated documents that are relevant to what I'm trying to explain. And the interesting thing to me is that it completely breaks out of our existing, sort of nineteenth-century notion of what a document is, you know, something with lines on it.

Frode Hegland: Thank you, Mark. Please try to write some of this down, and please keep some of it for Monday too; we only have 20 minutes left for talking about design today. So, Dene, I believe you had your hand up. Excuse me.

Dene Grigar: Yeah. So I want to go back to what you were showing with the hyperlinking from the various lists, right? That action we've talked about before, and I guess my next question is: once we're able to make those links, then what? What is the next step? There are several directions we can go with that; Mark, you can chime in here too. But I'm imagining it's not just showing the connections, right? This is the one thing that drives me crazy about visualizations. People will send me a visualization of something and go, look, I just did this. And my question is: so what does that tell me? What is it that I can do with this data? So the big question is not just that we can visualize the hyperlinks, but what questions we are asking to build the hyperlinking structure, and what we do with that structure once it's done. For example, a research question: what authors are talking about AI and VR simultaneously? Here's a list of authors, and we can draw those links. So it's driven by the research question. Then, once those links are produced, we say: now, step two, pull all those out into a separate document, so that we can develop the annotated bibliography, or rebuild a new reference list. So I think we want to think about this in three steps: research question, visualization, and then the output of that. Mark, do you agree with me?

Mark Anderson: Yeah I think that’s very interesting. And it becomes so it’s. It’s an endpoint to that is that you might be started out by reading a different document. Spotting a reference in there that you now have an insight belongs into effectively another one, a different set in your overall knowledge sphere. And so you’re going to build the link and and you may you may go off to that, that, that sort of next design space. You may stay where you are. But the point is you’re capturing it in the place in the time.

Dene Grigar: Yeah, I know what I.

Mark Anderson: ...the insight occurs, because otherwise it ends up, you know, written on a piece of paper by you and goes there to die, because you forgot, because the phone rang or something.

Dene Grigar: So let me just respond to this, Frode, really fast. When I'm working with graduate students and they're looking for a research topic for a paper, a seminar paper, a conference paper, or a dissertation topic, they're doing lots of reading and they don't quite know what they're going to write about, right? And they come across something in their list of readings and it's like, oh, that's the kernel of an idea. And then they go down the rabbit hole, John and I call it the rabbit hole, and they start digging through that rabbit hole and coming across a lot of other things. But then they get to another article that takes them out of that rabbit hole into another topic, another rabbit hole. The documentation of that process is the interesting part. It's a journey, the knowledge journey, and if we can capture that process in the headset, that would be very valuable. Even if you don't have a research question, you're trying to find your research question, right, Mark? You know, I need a research question; what is my research question? And you're going through and reading, and you're documenting that journey, to finally arrive at: I've narrowed down my topic, now I'm going to look specifically at this.

Mark Anderson: And interestingly, what you've just described goes back to Dave's comment about early hypertext, when they were trying to say that, in effect, the article is just the glue between the links. So in fact, the reading you're doing is the reflection that causes you either to start traversing explicit links, because there's a reference to this thing I follow, or to go, in a sense, to another thing that may not be visible before you, so it effectively instantiates a new link that isn't there at the moment, which you then wish to make. Yeah, it's really interesting. It's also interesting just how far this takes us, quite unintentionally, but I think correctly for this exploration space, from the traditional sense of reading.

Speaker10: Yeah.

Frode Hegland: So, absolutely, Dene. The first thing to say: the notion I thought of was the notion of rooms. What you might do is set up a room, a room being just a layout. Let's say you have several of these types of things up, with several different criteria, when you open a document. Not necessarily with visual lines connecting; it may be, as Mark said, that only specific references are shown. The idea is that you save these rooms. You're calling them research questions, which I think is cleverer than calling them rooms. Let's say you have one called the BS alert; maybe that's the first thing you open, where you have retracted documents, bad authors, whatever. And if nothing really shows up, fine, you swipe, so to speak, into a room where you see a timeline view, and then you go into another one. So if you want to write down specific things we should do, to further answer Adam's question, refined by what Dene said, that would be very, very useful. Because as Andrew keeps building the infrastructure for this, I'm not saying we have to do it this particular way, but we do need particular spaces where we can have particular thinking. Now, anything else on that? Because otherwise I think we need to go back to Andrew's page to agree on what we're going to ask him to continue with.
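
A sketch of "rooms" as saved layouts, with an invented schema purely for illustration: each room bundles a name (the research question) with the criteria lists to open and how the references are filtered, and swiping simply applies the next saved layout.

```typescript
// A "room" is just a saved layout: which criteria lists are open, how the
// reference list is filtered, and any view options. These field names are
// illustrative, not the actual build's schema.
interface Room {
  name: string;                    // e.g. "BS alert", "Timeline view"
  openLists: string[];             // names of criteria lists to show
  filter: "all" | "matching-only"; // show every reference or only matches
  showConnections: boolean;
}

const rooms: Room[] = [
  {
    name: "BS alert",
    openLists: ["Retracted", "Distrusted authors"],
    filter: "matching-only",
    showConnections: true,
  },
  {
    name: "Canon check",
    openLists: ["Key authors", "Field canon"],
    filter: "all",
    showConnections: true,
  },
];

let current = 0;

// Swiping between rooms just swaps which saved layout is applied.
function nextRoom(): Room {
  current = (current + 1) % rooms.length;
  return rooms[current];
}
```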

Dene Grigar: He does have some fine tuning to do, to get some of the bugs out. I think some bug work would be very important to do.

Frode Hegland: Yeah. Can you go to the page? Hang on, let me put the link in, because I've written some things at the bottom, and I want to make sure I'm not just writing because I think it's fun. So the first thing is: have we decided... Andrew, you are here, right?

Andrew Thompson: Yes, yes. I’m here.

Speaker3: Excellent.

Frode Hegland: Yeah. So if we can all look at this page, because I think we need to be a bit more organized about the to-do list for you. The first item: make it possible to turn the text selection tool on and off. Do we agree with trying this method? And if we do, then Andrew should feel completely free to change it a lot depending on his own testing, obviously.

Andrew Thompson: Did we decide on how you want to turn it on and off? Whether that's an alternate wrist tap, a button, or pointing? Did you have something?

Speaker3: Yes.

Frode Hegland: That’s what I wrote on the page. If you can reload that page, scroll to the bottom. It’s called under the section refinements for Next Build.

Adam Wern: Where is that? Where are we in base camp now?

Frode Hegland: No, the link that I put in the chat.

Adam Wern: Okay. You’re still using the chat?

Frode Hegland: I’m using the chat for this because what I think would make sense is every time Andrew gives us an update and he gives us this brilliant writing which we put on our websites, I think it would make sense at the bottom of that page in our Wednesday meetings to write what should be done for next time. So they kind of chain together, know what I mean?

Adam Wern: Yep. I’m just questioning the zoom chat versus having slack for all those documents. No.

Frode Hegland: No, no, it’s just a link to the document we always use. This is on the Future Text lab where we post all our things.

Adam Wern: Okay.

Frode Hegland: Peter, when it comes to this kind of gestural stuff, you really need to get a headset, because there are so many system controls that are already taken. And there's also the problem that if you do a gesture just while moving your hands about, the system can interpret many of them automatically. So the freedom we thought we had, for pointing and so on, we don't actually have. It turns out that if you put your hand on your lap and your index finger is a bit further forward, the system thinks you're pointing. So it's a much bigger research issue than we realized earlier. Yeah. Andrew, have you got the bullet point list?

Andrew Thompson: Yeah, yeah, I see it. I didn't want to interrupt anybody. There's a bit of confusion I have with it, specifically the explanation of how this laser pointer works. So I'm going to read through it and explain how I'm understanding you, and you either correct me or tell me I'm right. Okay. So there's no tool anymore, nothing visual; you point and a laser just kind of comes out of your hand?

Frode Hegland: No, that’s a very glad you’ve said that. I feel and please tell me if you guys agree that the act of pointing shouldn’t be a tool in the sense of picking up a tool. But if we need to illustrate it by having a tool on the hand, we may have to do that. So that’s a huge contradiction I apologize, what do you guys feel?

Dene Grigar: Just be natural, laissez-faire. It's embedded.

Frode Hegland: How about having a blue fingertip? Blue, because blue is often a selection colour, rather than yellow, which is a highlight. And instead of having a whole device, maybe it just means your finger now has a different task, I don't know. I saw your face there, Dene; I'm not so sure you fully agree with this particular interaction.

Dene Grigar: I’m just thinking about color. Color coding. I mean, the what you’re what you’re. I’m thinking about. I’m thinking. I’m thinking. Right? We’re asking our fingers to be color coded, right? Yellow being highlight, which is pretty standard for different functionality. Andrew I mean, Adam, I see your thinking too. Yeah.

Andrew Thompson: I mean, switching it to blue is easy for testing. It doesn't hurt to try it, right?

Dene Grigar: I would say green would be the colour, because that's like "go".

Frode Hegland: Go green. Although, you know, some people highlight with green, yellow, blue and everything, so that's a mess. Brandel, please solve this problem for us.

Speaker3: Okay?

Brandel Zachernuk: I don’t know if everybody seen my text editor, but one of the things that I did there is based on the sort of conceptual pose, distance to a pointing. I had a line extend from the, the distance from the extremity of the index finger pointing in the direction out from it. It was gray until it crossed the threshold for selection, and then it exhibited a color. The thing that I like about that is that it was both representative, that there is a thing there, but it also was stateful insofar as it showed when it crossed that activation threshold. And it also got out of the way. I don’t, you know, when you weren’t anywhere near that pose. So the benefit of it is that you don’t have to pick anything up. There’s no modality that requires any external action, rather than sort of readying oneself by moving into that pose. And so I think it’s a, it’s in broad strokes, a pretty interesting sort of mechanism for being able to identify and, and indicate those things, because it also means that people can kind of be rewarded by functionally babbling, you know, just moving their hands around and seeing which sort of things are encouraged, you know, to the point about, about, you know momentary perception and action groups being able to give somebody the ability to understand that they can move towards something. And for that to become some kind of meaningful action, I think is really interesting.

Adam Wern: And also with sound here: I find it very pleasing to have small sounds, always having a sound texture for objects when we're touching them at a distance.

Dene Grigar: We did that with Rob’s project. Andrew, the team that built that, the VR project, it was something very small, very light. It wasn’t a constant noise, but you just knew that interaction was occurring. Right. So that that is it’s more of a kind of an ambient experience. Just like when you pick up a piece of paper, you’re going to hear something, right? It’s real world.

Frode Hegland: That’s. Yeah. Sound. Yes, 100%. So. Brendel.

Frode Hegland: Can you tell me, because you said a lot, just so I understand: currently, with what Andrew's done, you do palm up and down to move the text up and down, and palm sideways to move it sideways. How do you go in and out of selection mode the way you were talking about? Just clarify that, please.

Brandel Zachernuk: So, and I have spoken about this in the past, I don't know whether it's something you've taken into your implementation at this point, Andrew, but because we have a bunch of fingers and the ability to track their pose: what I was doing in the Leap text editor is I took the curledness of all of the fingers and said, I want these to be totally curled, this to be completely not curled, and then I didn't care about what was happening with the rest, or maybe I did, I don't know. By encoding it that way, rather than having a binary classifier, am I pointing or am I not, hotdog or not hotdog, I was asking: what is the pose distance from whatever the hand happens to be doing to that target pose? In that context, it's possible to have goals and appropriate threshold distances for all of the poses that are encodable in that kind of schema. So I can say I want to do one thing where I'm using two fingers like that; I want to do something else where I have my thumb and forefinger touching and curled, but with these fingers planar, and I can have a distinction there. Functionally it has its limits in terms of the size of the pose space and the separation between the poses, but it gives you a really interesting kind of state space, like I said, to identify what things are reachable and what distance a given user's hand is from them at a given point.
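
Extending that idea into a sketch of a small pose space, again assuming per-finger curl values and with illustrative pose names and thresholds: each named pose compares only the fingers it cares about, and the classifier returns whichever pose the hand is currently within threshold of, if any.

```typescript
type Curls = { index: number; middle: number; ring: number; pinky: number };
type Mask = Partial<Curls>; // only the fingers listed here are compared

interface Pose {
  name: string;
  target: Mask;
  threshold: number;
}

// Illustrative pose definitions; real values would be tuned from testing.
const POSES: Pose[] = [
  { name: "point", target: { index: 0, middle: 1, ring: 1, pinky: 1 }, threshold: 0.35 },
  { name: "two-finger", target: { index: 0, middle: 0, ring: 1, pinky: 1 }, threshold: 0.35 },
  { name: "fist", target: { index: 1, middle: 1, ring: 1, pinky: 1 }, threshold: 0.3 },
];

// Distance over only the fingers the pose specifies ("don't care" for others).
function maskedDistance(curls: Curls, target: Mask): number {
  let sum = 0;
  for (const key of Object.keys(target) as (keyof Curls)[]) {
    const diff = curls[key] - (target[key] as number);
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

// Returns the nearest pose the hand is currently within threshold of, or null.
function classify(curls: Curls): { pose: Pose; distance: number } | null {
  let best: { pose: Pose; distance: number } | null = null;
  for (const pose of POSES) {
    const d = maskedDistance(curls, pose.target);
    if (d <= pose.threshold && (!best || d < best.distance)) {
      best = { pose, distance: d };
    }
  }
  return best;
}
```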

Frode Hegland: Okay, so then I have a question. Is it possible, feasible, to have all of that: you do this to go up and down, but if you change to a pointing gesture you now have a laser, and maybe you even relax your hand a bit while something stays selected, and then, if you want to activate it, can we actually tap it at a distance? Is the recognition good enough that that counts as an actual activation?

Andrew Thompson: And it’s definitely not good enough to consistently do that. But like the concepts there, like we could we get a lot of false positives, but it would technically work.

Frode Hegland: Okay. Right. So why don't we all just experiment with our hands and see what happens? We scroll, we get to the right bit, and now we're pointing, so that's the one we want. I mean, this is a bit weird: pinch, pinch. Because the laser is coming from the hand, not the fingers, correct? But even just lasering and then going from pointing, suddenly it's moved a bit.

Speaker3: All right. Yeah.

Brandel Zachernuk: So what I was describing is something that's temporally invariant, in that it doesn't depend on history; it's something you can compute in a moment without any memory. One of the things you could do, if you started to involve the past, is keep a running tally of what is likely to have been the target for whatever the action is over the last second or so, a good long time like that. Then, if you have an action, you basically say: whatever the thing is I wanted to act on, for however long it was unclear what the action was going to be, as long as we know the target was that, do the thing I want to do now to the thing it looked like I wanted to act on before. That resolves a lot of those momentary stability issues, at the cost of people being able to suddenly change their minds. But those are thresholds that can be played with.
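
A sketch of that running-tally idea, with illustrative names: record which target the ray hits each frame, and when the action gesture fires, resolve it against whichever target dominated the recent window rather than whatever happens to be under the (possibly shifted) ray at that instant.

```typescript
interface Sample { time: number; targetId: string | null }

class TargetHistory {
  private samples: Sample[] = [];

  constructor(private windowMs = 600) {}

  // Call every frame with whatever the ray currently hits (or null).
  record(targetId: string | null, now = performance.now()): void {
    this.samples.push({ time: now, targetId });
    const cutoff = now - this.windowMs;
    while (this.samples.length && this.samples[0].time < cutoff) {
      this.samples.shift();
    }
  }

  // When the user performs the action gesture, act on whichever target was
  // pointed at most within the window, not the one under the ray at the
  // exact instant of the gesture.
  resolve(): string | null {
    const counts = new Map<string, number>();
    for (const s of this.samples) {
      if (s.targetId) counts.set(s.targetId, (counts.get(s.targetId) ?? 0) + 1);
    }
    let best: string | null = null;
    let bestCount = 0;
    for (const [id, count] of counts) {
      if (count > bestCount) { best = id; bestCount = count; }
    }
    return best;
  }
}
```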

Frode Hegland: So does that mean that if we do the pointing and, say, leave it for a second, the visualization now says it's loosely selected? It's kind of stuck because we pointed at it long enough, and then we do an action. If we want to point at something else, all we need to do is move, and after another second it moves: a little bit of stickiness. Is that what you mean when you're talking about the temporal nature?

Speaker10: Yeah.

Brandel Zachernuk: Yeah. It wouldn’t it wouldn’t need to be as anything as significant as a second. You know, ideally. And it’s very likely that it could be implemented more or less opaquely to the user in the sense that they wouldn’t have an awareness that that is some of the robustness of the determination that’s going on behind the scenes. Because the thing is that you would also you could also have the certainty at a confidence level effectively of that of is this versus that. And, you know, I’ve seen a number of user interface presentations, academic and commercial over the years where they they do this kind of probabilistic resolution of like, given that we have some inherent noise within our system, and we believe that this is the ground truth of the actions through it, how sensible is that as a stream of actions for achieving a certain output? And then you sort of basically propagate backwards to go like well, that meant that they probably press E instead of F, you know, and stuff like that. So it’s a, it’s a, it’s a pretty apt way of doing things. It requires a little bit more robustness, but it effectively becomes like opaque to the user that they’re not aware that that’s a mechanism by which these things are being dealt with, but it just requires a little bit more robustness and a degree of remove from what is the input versus what is the action you want to have. But not much. It’s not it’s not huge.

Speaker3: All right. So that makes sense.

Frode Hegland: Yeah. Too much information, in a good sense. In closing, because we’re over time and we’re going to try to be robust when it comes to timing as well: Brandel, I have a meeting with a friend on Friday. He’s the deputy editor of The Economist, so obviously I want to impress him. He’s going to be putting the headset on for the first time, and he’ll be looking at this. Now, the notes that I’ve taken, Brandel, are based on what you said. And Andrew, stop me and kick me under the table if you feel like it. But what I basically said is: point to turn laser selection at a distance on, not as a tool, but in the way Brandel was talking about, I believe. And then… oh, what happened? Did Danny go? Danny dropped.

Speaker10: Yeah yeah.

Frode Hegland: That’s fine, obviously. So now we’re in selection mode, not moving mode. And then Andrew will experiment with this, or this, or whatever, based on your feedback for how the system will know what you really are pointing at. Is that right? Will you help him with that, or am I on a completely wild goose chase?

Andrew Thompson: I mean, that predictive stuff is probably a later thing, if I had to guess, because you want something to show by Friday, so we’re probably not getting that implemented by then. But I can change the gesture just fine. So my thought is: you still need to be able to select, right, which breaks the pointing. So what if, for now, since no one has a solution yet for the actual action button, the trigger isn’t the finger extended; the trigger is these three fingers curled all the way. So this turns on the pointing, but you can also do this and it doesn’t break the pointing effect. So you can point like this and then squeeze to activate.
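
A rough sketch of that gesture split, assuming the WebXR Hand Input API and its TypeScript type definitions: pointing stays on while the index finger is extended, and the action fires when the middle, ring, and pinky fingertips all fold in toward the wrist, so the squeeze does not disturb the pointing ray. The ratio thresholds are guesses and would need tuning against real tracking data; none of this is Andrew’s actual implementation.

```typescript
// Heuristic “point with index, squeeze with the other three fingers” detector.

const CURLED_RATIO = 1.1;   // tip-to-wrist under ~1.1 × palm length → treat as curled
const EXTENDED_RATIO = 1.6; // tip-to-wrist over ~1.6 × palm length → treat as extended

function dist(a: DOMPointReadOnly, b: DOMPointReadOnly): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function jointPos(frame: XRFrame, hand: XRHand, name: XRHandJoint, ref: XRReferenceSpace) {
  const space = hand.get(name);
  if (!space) return null;
  const pose = frame.getJointPose?.(space, ref);
  return pose ? pose.transform.position : null;
}

/** Returns { pointing, squeezing } for one tracked hand, or null if joints are missing. */
function readPointSqueeze(frame: XRFrame, hand: XRHand, ref: XRReferenceSpace) {
  const wrist = jointPos(frame, hand, 'wrist', ref);
  const knuckle = jointPos(frame, hand, 'middle-finger-phalanx-proximal', ref);
  const indexTip = jointPos(frame, hand, 'index-finger-tip', ref);
  if (!wrist || !knuckle || !indexTip) return null;

  const palmLength = dist(wrist, knuckle); // rough per-user scale factor

  const curled = (tipName: XRHandJoint) => {
    const tip = jointPos(frame, hand, tipName, ref);
    return tip !== null && dist(tip, wrist) < CURLED_RATIO * palmLength;
  };

  const pointing = dist(indexTip, wrist) > EXTENDED_RATIO * palmLength;
  const squeezing =
    curled('middle-finger-tip') && curled('ring-finger-tip') && curled('pinky-finger-tip');

  return { pointing, squeezing };
}
```

Because the index finger stays extended throughout, the ray’s origin and direction barely move when the other fingers curl, which is the whole point of separating the squeeze from the point.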

Frode Hegland: I’m detecting Mark’s vigorous nodding.

Speaker10: Okay.

Andrew Thompson: I’ll aim for that, then. I’ll try to get that working by Friday. Obviously the highlighting is going to take a lot more work, so maybe I’ll tinker with it, but temporarily I might remove the highlighting and just trigger some kind of visual cue when you make a selection. So instead of actually selecting text, it just triggers an event at that point.

Frode Hegland: Yeah, yeah, that’s freaking amazing. Please reload the page if you want to and tell me if you agree with the issues I’ve put up. I also have an issue. I see you, Peter, just one minute. Under control panel: decide what controls to have, and where. That’s for all of us; that’s the design issue. The prism has three sides, and we can start experimenting and thinking about what we want to have on those sides. So that’s cool. Under hand tracking I just have “polish interaction motions”. This is what was discussed earlier, and Andrew’s already working on it. Anything else I should write, edit or remove on this list?

Andrew Thompson: I don’t see the things you’re talking about. I don’t think it’s gone live yet. I have to… is it at the bottom of the page?

Frode Hegland: It’s at the bottom of the FTL page.

Adam Wern: But.

Andrew Thompson: I see the refinement, but not the hand stuff yet.

Adam Wern: Oh, in general.

Speaker10: Okay.

Frode Hegland: All right.

Adam Wern: Just a super quick one: is this where we’re going to put things, in the blog, or where are we having the feature discussion?

Frode Hegland: This is where we put what Andrew has done and what we suggest he should do. Feature discussion and everything else is perfectly fine to have in Slack. The point of this page is that whether we’re talking to someone new to the team, to each other, or to someone entirely outside, we just point them to this list, because this list will keep growing. I’m just going to show you a link to what I mean. So if you look at the page I just put in here, you’ll see, under reference block interaction, this is what we’re working on now. Some of this has gone by the wayside, so I’m going to remove some of this stuff.

Speaker10: You know.

Adam Wern: I will. It’s fine.

Frode Hegland: Right. Yeah. So reference block interaction is the thing we have; reference block interaction two is what we just looked at. Peter?

Peter Wasilko: Yes, I was wondering, maybe Brandel would know: is it possible to have a region in which you could prevent defaults in visionOS? Could I have, say, a translucent glowing blue sphere, and if I stuck my hand inside the blue sphere and performed what would otherwise be control gestures, have the system ignore them and just send them back as hand positions, without applying anything that visionOS would otherwise be doing? Then pull my hand back out of that region and have everything work with all the normal predefined gestures.

Speaker3: In what?

Brandel Zachernuk: So first of all, visionOS doesn’t really have any gestures. It’s eyes and hands, primarily. You use your eyes to point at things; this is in the native operating system. You look at something and then you tap your fingers together, or you undertake a drag, which can be interpreted as a two-dimensional scroll if you’re operating on a logically appropriate object. In WebXR there are no gestures at all. You’re entirely at the mercy of the author, in this case Andrew and Adam. So yes, if the authors so deign it to be, then it’s entirely possible to set up a system like that.
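
In WebXR, where the author owns all gesture interpretation, the region Peter describes could be as simple as the sketch below: while the fingertip is inside an author-defined sphere, the app skips its own gesture recognisers and exposes only the raw position. The sphere and the callback names are invented for illustration, not part of any existing API.

```typescript
// Hypothetical “raw input zone”: inside the sphere, hand data passes through
// untouched; outside it, the app’s normal gesture handling applies.

interface RawZone {
  center: { x: number; y: number; z: number };
  radius: number;
}

function insideZone(p: DOMPointReadOnly, zone: RawZone): boolean {
  const dx = p.x - zone.center.x;
  const dy = p.y - zone.center.y;
  const dz = p.z - zone.center.z;
  return Math.hypot(dx, dy, dz) <= zone.radius;
}

function handleHand(
  tipPosition: DOMPointReadOnly,
  zone: RawZone,
  onRawPosition: (p: DOMPointReadOnly) => void,
  runGestureRecognisers: (p: DOMPointReadOnly) => void
): void {
  if (insideZone(tipPosition, zone)) {
    onRawPosition(tipPosition);         // report the bare hand position
  } else {
    runGestureRecognisers(tipPosition); // app-defined gestures as usual
  }
}
```

Since nothing in WebXR itself interprets hand shapes, there are no platform defaults to suppress; the zone only gates the app’s own recognisers.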

Speaker10: Yeah.

Brandel Zachernuk: It’s likely that the repertoire of understood, necessary actions may expand in the future, just as the level of sophistication of the gestures within iOS has ballooned over the nearly two decades it’s been around. But at this point…

Peter Wasilko: Yeah, I see, because I thought that there were certain gestures that would bring up system menus and things if they were made, and I was just wondering whether it was possible to prevent those defaults when you’re in WebXR. But if that’s not the case, then in WebXR we really don’t have anything to worry about.

Adam Wern: Yeah, well, for Quest that is a slight consideration, because there are special system gestures, like turning the hand over, that will trigger system menus or exit menus from WebXR. And you can’t really prevent them; as I’ve understood it, they’re global and always there. They’re somewhat hard to trigger, but I trigger them now and then while gesturing. So you can’t prevent that.

Brandel Zachernuk: Yeah. So actually there is one, and it’s a gesture, but it’s a gesture again of eyes and hands: if you look up and then pinch right now in visionOS, you go into Control Center. It’s to do with where your eyes are, so if you need to be looking at something that is above you, you run into it. The height at which that trigger sits is configurable, but in order to avoid it you have to tilt your head up so that you’re not “looking up.” It’s funny and annoying, but it’s precisely the problem you’re identifying as something to think about, Peter. The distinction right now in visionOS, though, is that it’s eyes and hands rather than merely hands. And in general, no, it’s not possible for people to leverage eyes at all within WebXR, because that’s not a safe thing for advertisers to ever get access to.

Frode Hegland: A related issue is that on the Meta headset, in order to get out you have to do this weird thing that for me hardly ever works, and then to take a screenshot, it’s just ridiculous. And that’s hard-coded, wherever you are and whatever app you’re in. It’s not the same thing on the Vision: on the Vision, if you want to get out of VR, you just tap the Digital Crown. Sometimes having a physical button is really wonderful.

Adam Wern: Yeah, it’s so sad that they didn’t put more controls on the headset itself, because to me it would have made so much sense to have either a physical button or some sort of haptic or touch thing, a gyro thing, so you could tap the headset to get out, instead of having those gestures in there, because they are not that good. Yeah, yeah. And on the Vision they have…

Frode Hegland: No, no, I know, I’m just showing something else. So this is the one I just mentioned. But what confused me, and Brandel, I only recently found this out: this button on the left is to take pictures out in the real world, right? And record in Control Center is to record what you see in the fake world, right? I just didn’t understand the difference. But of course one is on the device, so it shoots the world; the other is in the device, so it shoots what’s in there. So it made a huge, huge difference. But the Digital…

Adam Wern: Are you calling it fake now? The virtual, the wonderful virtual world we’re building, is it called fake reality? I don’t know.

Frode Hegland: No, no, no, I’m talking about this one as being fake. The Vision one is the real reality. Hello.

Adam Wern: Okay, okay. The other way around.

Speaker10: Yes.

Frode Hegland: Okay. So Andrew has gone because he has responsibilities. I think we agree on what he should be working on right now anyway, which is very good. Yeah. Just as an aside, I just submitted Liquid to Apple for an update because they wanted me to change something. The procedure has been so annoying: they come back complaining about how to quit the app. It’s ten years old; every few years I have to go through and explain the same things to them, and then it gets approved. It’s a pain, but it works. Sorry, it just popped up and it’s annoying. Thank you for today. This is amazing. And we need to remember these meetings are purely to design what we’re building now with Andrew slash Adam or anyone else. Anybody building something, we talk about it today. Anybody dreaming, we talk on Monday. Right?

Speaker10: Yeah.

Frode Hegland: Yeah, basically. Fridays are different now. Andrew will be using that time, plus other time, to code. If Adam or Brandel or whoever has time to come in and help him, that’s great, but it’s more casual moving forward. So, you know, when he needs help he’ll also put up his hand. I will tend to pop in on Fridays just to see if there’s anything to clarify, but it’s by no means office hours, so to speak. And Brandel, at the beginning today we talked about the fact that we’re going to start inviting people to the book and the symposium soon, Danny and I.

Speaker10: Click the letter.

Brandel Zachernuk: I’m invited to a salon that Andy Matuschak is doing, and I just noticed on the Signal channel that Elio’s coming, so that’ll be fun. It’s on Saturday, about spatial interfaces, spatial computing. So it’ll be really interesting to see what everybody comes with.

Speaker10: Yeah.

Frode Hegland: That’s brilliant. And please feel free, on behalf of the community, to invite them into our community.

Speaker10: Definitely.

Frode Hegland: It’s always a good thing. Yeah. I think we’re beginning to make some progress because we have more things, more specific things to argue about.

Speaker10: Yes I did. Yeah.

Frode Hegland: And, Mark, I know you’re super busy, but those academic points of view from earlier, about what to view and in which environments: if you write something down, even if it turns out to be rubbish, because that’s what I’ve done a million times and, you know, it collapses, at least we’ve learned something.

Mark Anderson: So I was thinking I’ll put those in Slack, and I’ll try to strand them, so there’ll probably be two or three things and I’ll do them as two or three posts. Then people can reply and we can have some sort of stranded discussion if that’s required. Does that make sense?

Frode Hegland: It does make sense. But I would also like to see more use of Author and Reader, partly because they’re mine and I like them, and also because part of our Sloan grant is Visual-Meta, so at least we can test the realities in our own community. One of the things we want to do, of course, is greatly increase the payload for references, and not only in Visual-Meta; obviously, if we put it in a data resource fork, that’s fine too. But it would let me, for instance, comment on what you’re writing and really build up the system for this. Because maybe, as we briefly mentioned, next week, Brandel, we talk a little bit more about the data flow, or the data structure, for this whole thing. You know, what do we mean by an annotation? What do we mean by notes? How is the collection of stuff saved? We’ve talked about it in random ways for two years now; now it’s a very specific thing, right?

Brandel Zachernuk: Yeah, it’s funny, I have thoughts about most of those things, but at this point they’re very old. I haven’t been in a place where it’s been relevant to bring them up with anybody else for a long time, so I’ll have to dig through whatever records I have to work out what I think about them.

Mark Anderson: For what it’s worth, one of the things that sort of fell into place just the other day, and it rather came up today when we were talking about the fact that our intentionality shifts all the time, like the conversation I was having with Dave about link types, where the reason people gave up is that it was all very hard-edged. One of the problems with making definitions is that we’re actually working against ourselves unintentionally: we want the classification so that we understand what things are and can make the right things appear in the right place, but our human behaviour seems to work massively against that, because we have such plastic notions inside our heads that we happily transition across these categories. To the extent that if you present me with a thing carrying the wrong label, I always have to stop and sort of open the door to get through to the other side, because that shouldn’t have been there. So it’s an awful dilemma: we’re trying to do something useful and actually making it harder for ourselves. I don’t know the answer there, but it did strike me as somewhat ironic.

Brandel Zachernuk: Yeah, formalism is in constant tension with the fact that all models are wrong but some are useful: we actually want to be able to put tentative frames over things in order to come up with tentative resolutions. So it’s one of those things we’re closer to being able to cope with within a computer when we have more resources to waste on wasteful things.

Frode Hegland: So this is next week’s agenda. I put design, led by you and me, Brandel, just to put you up there, because I’m supposed to be responsible for that one. Discussing the data flow and metadata: is that feasible? Reasonable? Does it make sense?

Brandel Zachernuk: I can only join at the time I joined today; I have another meeting that clashes with it, because I need to meet with some folks in Europe to argue about some stuff. But yes.

Mark Anderson: You can do that anyway.

Speaker10: That’s fine.

Frode Hegland: I.

Speaker10: Have a lot these days.

Frode Hegland: That’s absolutely fine. Okay. And then Yeah.

Adam Wern: I have to go. Have a nice whatever time it is. Yeah.

Frode Hegland: Yes. Yeah. Let’s finish; this was way over time. Greatly appreciated. See you all next week, and some of you on Friday. Bye bye.

Peter Wasilko: Bye.

Speaker10: Yeah.

Frode Hegland: I should probably exit, shouldn’t I?
