31 January 2024

Frode Hegland: Oh, I can’t hear you. I’m just trying to do a last-minute thing to our slides. How are you now? I can hear you. Yep.

Speaker2: Of life on the other side.

Frode Hegland: It was. It’s been tough. And then yesterday, because of this thing — which we may very well end up changing — it was fine. You know, it’s that kind of thing you sometimes just need. So I’m in Photoshop, just trying desperately to finish our presentation. Yeah, yeah. Did you —

Speaker2: Do that?

Frode Hegland: Yeah. That’s okay. Just a little bit of a breakthrough where we have, as I think about it, like, legs to stand on — that’s really, really useful. So that’s that, really. I’m going to save this thing. How are you?

Speaker2: Well, I’m fine. Got a little bit of a headache when I tried some VR stuff today.

Frode Hegland: Oh, okay.

Speaker2: It’s not perfect. And especially when you try development stuff, it will be buggy, and buggy is more problematic because it can —

Frode Hegland: Yeah. Yeah, well that’s good. I just saved something. So, John, Dene’s brilliant husband — he is not that interested in trying new things. We went through it yesterday; he had not seen Brandel’s implementation of the march from water, and he was very happy to see that. But he’s way too worried about motion sickness and stuff. Which is fair enough, obviously, but, yeah.

Speaker2: Yeah, it’s understandable, because it has been the big issue. It’s better now, but it’s not perfect. And in the early Oculus — I remember trying the roller coaster thing with the original Oculus, like, was it ten years ago now? A long time ago — and it’s kind of the worst experience you could put someone in. Of course, it feels alive and real, and it triggers some sense of height as well when you go up there and fall off. But it’s kind of the worst experience you can do.

Frode Hegland: Yeah, I agree. That’s absolutely — hi, Andrew. I heard Dene had some friends over — game developers, like top-notch game developers — for dinner; it was amazing to meet them. One thing they pointed out is, in their experience, people who know an area or domain more are more likely to have motion sickness if it’s done in VR. So a pilot will more likely not deal well with a flight simulator, because they have so many different kinds of cues, they said, that aren’t present, so their brain is more messed up. That was a surprise.

Speaker4: That makes sense. I hadn’t thought of it that way.

Speaker2: And it may also explain that you can adapt to it somewhat — your brain starts to learn how the headset behaves, even the quirky things, so you get less surprised and therefore less, yeah, kind of shocked.

Frode Hegland: Right. Look, look. Look, everyone. Adam, look! Hey, Dene. Long time no see.

Dene Grigar: Good morning everybody.

Frode Hegland: Morning.

Speaker2: Morning. Evening.

Dene Grigar: How’s everybody today?

Speaker2: I’ve got some VR sickness.

Dene Grigar: You did?

Speaker2: Yeah, yeah. I’ve been trying different things, and the problem with trying unfinished things — my own things and others’ and so on — is that they are more likely to trigger VR sickness, because works in progress are, by definition, works in progress. But it’s good to be a bit sick when you develop as well, because then you will be compassionate and design something that could work for more people. If you really have sea legs, or VR legs, then you risk doing things that are crazy.

Dene Grigar: I agree, you know. And also, it’ll be interesting to compare the sickness with the Meta versus the Vision Pro, right, to be able to give some advice to people about how to mitigate these things — and maybe we offer some settings that can help people that have a propensity towards sickness. There are people who get dizzy over anything, right? I don’t get motion sickness at all, so I can do weird things in VR; that doesn’t bother me. So I should not be making VR environments. Morning, everybody. Peter. Mark. Hi, everybody. Welcome, welcome to a Wednesday where we have a real agenda. We’re ready to start on onboarding week. Good to see everybody. And I’m going to set my settings so we’re at more of a gallery view. There we go, I like that view better — we look like the Hollywood Squares, if you know anything about American television. Oh, we’ll give everybody a second. So, Frode and I have had a very busy week — it’s hard to imagine this is only Wednesday; there’s three more days to go — but we’ve managed to get a lot of work done. And while we’re waiting to get started, I’ll say that we had a lot of meetings with folks on my campus to get stuff kind of straightened out for expenditures — like, how do you buy a headset for someone in the UK? Tricky stuff like that — to something as simple as this is how we’re going to handle the expenditures in an Excel spreadsheet. So that’s been very, very helpful.

Speaker6: I’m in.

Frode Hegland: I’m in our Basecamp and I can see, in the overview and Docs & Files, Wednesday’s agenda. So I’m gonna click through. And — oh, there it is. It’s kind of hidden.

Dene Grigar: Yeah. So what’s going to happen? Yeah.

Speaker6: Yeah. No, it’s.

Frode Hegland: Just odd the way.

Dene Grigar: Can I show you how to pin things in Basecamp? You can actually pin things, so it makes it really easy to find — so that might be something to try. And then also, when we have another agenda, I’ll make a file that says past agendas, and the only one showing up will be the agenda for the week. Right now you have two agendas, and one is the test agenda.

Speaker2: We should. It’s the wrong date. I think it’s yesterday’s date.

Speaker6: Yeah, it should be the 31st.

Dene Grigar: Yeah, it should be 31st.

Speaker6: Yeah.

Speaker2: It could be different time zones as well, but we are on the same day, I guess. Now.

Dene Grigar: I will fix it. And now there’s a new version. So if you refresh, you’ll see it.

Speaker6: Yep, yep. Thank you, thank you.

Frode Hegland: Dene, just as a tiny little point of elegance: the folder — or sorry, the document name — can you just change it to “Agendas”, not “Wednesday agendas”? Because it just looks better on the overview screen, and we only have one day with agendas anyway.

Dene Grigar: That’s fine, I can do that too. Let’s do it right now.

Speaker6: Thank you.

Peter Wasilko: Okay. Where should I be looking? In base camp. I’m on the main entry point page. Now.

Speaker6: There’s something called. Can I show you?

Frode Hegland: I’ll show him. While you do that: there are three main things at the top — Message Board, To-dos, and Docs & Files. Do you see that, Peter?

Peter Wasilko: Hang on. All right. Home... Hey!, Lineup, Pings, Activity.

Frode Hegland: You might be in the wrong.

Peter Wasilko: Okay, okay. Message Board, To-dos. Okay, I see it now. Message Board, To-dos, Docs & Files.

Speaker6: Yes.

Frode Hegland: Docs & Files is where it is. So, perfect.

Peter Wasilko: Okay. Docs and files agendas. Okay, I see it now.

Dene Grigar: Very good. And know this: when I post the agenda — when I finish the agenda — it automatically populates in our Slack channel, and I did that. So you don’t have to go into Basecamp directly; if you want, just click on the link in Slack and it’ll take you right there. So in the future you’ll be noticing that, and it’ll make it a lot easier for yourself.

Speaker6: Oh. Very good. Hi, Mark.

Dene Grigar: Everybody there.

Speaker6: Well.

Dene Grigar: Shall we get started?

Speaker6: Well.

Frode Hegland: I think Alan is out with his dog. Or what’s happening.

Dene Grigar: He might be walking his dog right now, so he’ll show when he is ready. So you’ll know: Alan has a dog that’s, like, 15 years old and very feeble, and he can’t walk up and down the stairs, and he sleeps upstairs. So Alan has to carry him downstairs and walk him, and he seems to need to do that — this dog is not doing well. And those of us that don’t have children, you can understand that sometimes we get very attached to our pets.

Frode Hegland: Okay. Yeah, I do need. Please lead the way.

Dene Grigar: So the way the Wednesday meetings will unfold is that we will start with announcements — and every week there’ll be some sort of announcement, probably. Then we’ll spend time on the building section, and this is the dev report led by the dev team; they’re going to be talking about what they’re doing, and then there’s time for feedback and questions. Then we’re going to turn our attention to designing, and this is interactive explorations — non-domain-specific interactions first, 20 minutes max — and then user story mapping, and that’s led by Alan. And here’s Alan — good timing. That’s what you need to do: just say his name. Hi, Alan, we just said your name. Hi. And then we’ll have ten minutes for next steps, roughly. And so this is meant to give you some —

Speaker6: Structure.

Frode Hegland: For Alan’s benefit, why don’t you just go back — since it’s not a lot, but it’s worthwhile. Please, Alan, we’re on the agenda for today in Basecamp. Have you been able to access that? Some of us had some issues.

Speaker8: Yeah, I’ve seen it.

Speaker6: Okay, good.

Frode Hegland: So we’re just going through that now to discuss the initial idea for the agenda and then to see what changes we may want to make.

Speaker6: That your dinner?

Frode Hegland: Do you mind running through it again, please?

Dene Grigar: All right. So if you look at the agenda, you see that we have about ten minutes for announcements. We may not need that much, but ten minutes is the max that we’re going to be spending on announcements, and every week there’ll probably be something to announce: updates on the conference — the symposium — updates about the book, those kinds of things. The building section, 30 minutes max: this is when the dev team reports back on things that they have developed since Friday, when they met, and then there’s time for feedback and questions. And we want to give them feedback so that when we send them off after Wednesday, they know what they’re going to be refining, what they’re going to be developing. The designing section is next. We’re looking at interactive explorations first — 20 minutes max for that — and then also user story mapping, and this is led by Alan. This is not to say that designing is separate from building, because the way I project-manage my teams, the dev team works on the coding, but the design team gives them the Figma, gives them the color palette, the mood board, the typography — the actual Figma design, the front-end design style, all that material. They make that and hand it to the dev team, and we also embed a designer on the dev team to work with them to make sure it’s been, you know, instantiated in the right way. Right? But we do have two separate teams — a design team and a dev team — and we’re looking at bringing those together. But they also have their own activities they’ve got to do that are separate from programming, or separate from actually working on the front end. Right. And then we end with next steps, ten minutes roughly, and then we’re done.

Frode Hegland: Yeah. Just to elaborate a little bit on this — we’ll look at the structure and roles. Now, the structure: simply look at the roles. The dev team — this may be slightly too fancy a name, with all due respect to a brilliant dev; it is more coding. All of this is development, and all of this is design. This is not about shutting people up in different areas; it’s more about areas of responsibility. So should we just look at the roles next — does that make sense? Right. Or structure — let’s just see structure first.

Dene Grigar: So let’s look at structure. The other thing I was trying to do with Frode is figure out for myself the relationship of this Sloan project in conjunction with what’s been happening for the years that you guys have been working together. So what you’re looking at here — the top part is from the Future of Text project, right? This is what he’s been working on for years. That’s in black. If you look around the left-hand side, all the activities — the weekly lab sessions on Monday, the Georgia Tech symposium, the book — all of that is Frode’s, you know, what you guys have been doing with Frode for years. Now we’ve got this grant, and the question is, how does this fit into this realm that he’s already developed? And so the way this works out is that Sloan is actually funding some of these activities now. So the dialogue — there are aspects of the grant that involve dialogue, and that dialogue funds us being able to get an editor for the book, to pay for the venue for the symposium, and also to let us kind of promote — we’re getting a social media person to promote this work we’re doing and hopefully build the Monday groups.

Dene Grigar: Right. We’re looking at building that lab discussion by bringing in other people. It’s also funding the VM and the XR. The XR has two components, and that’s the Wednesday lab feedback sessions — so there are people that may not want to come to Wednesday, but we’d love to have everybody here; it’s another two hours of your life, so I totally get how this is hard — and the Friday meetings, which are for the VR dev team. So think of the dev team as VR developers. Andrew’s been building VR projects for quite a while now, and Adam too, so they’re kind of like the major players in the dev team. But we’re also looking at, you know, user testing, bug testing — there are a lot of other components involved in the dev team. So this is the structure as we’re trying to envision it, and the funding from Sloan pays for everything that’s gray, right? It supports the things that are gray. Does that make sense to all of you? It helped me — I mean, it helped me a lot — to be able to talk about this to my colleagues and to my administration. Comments?

Speaker2: Perhaps we could add a few. There are some community projects as well, or there have been — like, Mark and I have done some things — and they are there as an invisible box. It could be good to have that box, to see that. Can I ask —

Speaker6: You.

Dene Grigar: Adam? Absolutely. This is a starting point, right? It’s not the final draft. What I’m hoping we can do is start drawing boxes for all of our projects, because one of the projects I’d like to do is an art project out of this, you know, and that would come out of this environment that we’re building. What do all of you have that builds off of this? So take this structure, download it, mark it up and give it back, and I’ll continue to refine it. What we’re showing you today is not “this is how things are going to be”; it’s “here’s a draft”. We’ve got to start somewhere, you know. Here’s what I’m imagining. Frode and I sat down with this and we’re, like, drawing these lines, and I’m moving these lines around, and he’s yelling at me about “you’re using Pages, you should be using Keynote”. And, you know, it’s Pages.

Frode Hegland: It’s a miracle that it works. But hold on — what you’re saying is really, really important, because that issue, what you just mentioned, is basically the key to the whole project. When there are other areas of interest that relate, we need to find the best way to put that in, because, as Dene has said and I will emphasize, there can be many other grants coming out of this. So if somebody scoped out a thing, you know, having that properly recorded here will obviously be beneficial to show what it is in the future. So please email or Slack or whatever the heck you want for this to go into the boxes. Makes sense, right? Yeah. Peter.

Peter Wasilko: Yeah, maybe we could have a box for out of current scope side projects.

Speaker6: Yeah, that’s. That sounds.

Frode Hegland: Very. Alan. That’s. Yeah, that’s a very good idea.

Dene Grigar: Peter, I’m going to be teaching a course in the fall called The Future of Text and XR to my students. That’s going to be another box like pedagogy.

Speaker6: Right.

Dene Grigar: It’s a side project, you know, but it’s associated with this. It’s bringing students into my research, which is really smiled upon at my university. Right.

Frode Hegland: So let’s go and look at our roles there, because that’s also very rough. So if you click back and click on roles — well, you all know how to navigate. Dene, can I do the first overview of this one? Okay. So it took us quite a long time to just even think of rough titles for ourselves. I’m called creative director primarily because I’m not good at any of the specifics, so I just thought that was kind of funny — and Dene came up with that. And Dene, by the fact that her name is on the project, has to be operations, but she is also much wider than that: she has the academic insight, and she’s an artist. So that’s nice. Now, the dev team really is coding for XR. Peter, I understand from our conversation that you would like to be part of that team, and of course you’re welcome to, but we’re not going to be doing things that aren’t XR at this point. So if you want to contribute to that, it’s moving you up as a cut and paste. I’m just jumping to the bottom now: we’ve just put you and Mark under metadata and process, and the reason is Mark is very good at how things should flow with data. So again, all of these are for discussion — we haven’t made any stamps or stamped anyone’s foreheads, right? And then the user story mapping team — and I’m surprised Rob is not here; I hope he will be.

Speaker6: Alan.

Frode Hegland: Obviously you are in charge of that if you want to. But if we go back, please, to the agenda, because this is where the rubber hits the road, right? This did not make it easier for me to see on my own screen here. The idea with the agenda today: we have, obviously, going over ten minutes to talk about our own process, but after that it’s basically Andrew and Adam showing what’s been built — and if someone else in the team has built a thing, showing that too. That’s why Ben and Brandel were listed, even though Brandel probably will not code exclusively for this project. So they show the thing they built, get feedback and suggestions, and then decide what to go on with after that. The reason we’ve separated the design — I need to be really, really clear, and of course, you know, questions are good. By the way, how bad is the background noise? It seems a bit loud to me. Are you okay? Can you hear me okay? Yeah. Sorry, Adam, I’ll go to a different — actually, this is the last coffee shop day; I’ll be on campus after this anyway. But two different designs: there is the important user story mapping, which is addressing why are we doing this, for whom, and how — these are the really, really important questions. We’re also doing more generic stuff such as: how do we get stuff into the headset? How do we access a control? These are things we should all be as involved with as we possibly can, because it doesn’t matter if you’re an academic or whatever you are, you will need these basics. That is why they’re different, though of course we expect they will inform each other. Alan, considering your expertise and passion with Nathan — do you have any comments on this so far?

Andrew Thompson: I understand that it’s a draft. My impression of it as a draft is a little bit like inventing a car and saying these people are in charge of the engine and these people are in charge of the wheels, but not really understanding what a car is, right? And I’m not saying that I understand what a car is. But the user story mapping — it’s good that it’s so specific, but it’s almost like one tool in the tool belt that may be useful at times, and may not be. And next to the design experiments, I don’t understand at the moment how that leads us to — I guess, going back to some of the original questions — you know, when you have a process, everybody agrees to a sort of mental model of how we’re going to go forward with this. And one way is to just do experiments and throw things at the wall and see what sticks. And another is kind of like: what do we think are the good bets? What do we want to move towards? So I’d like to hear more about that, I guess, when I think about doing user story mapping and it being separate from —

Andrew Thompson: I don’t know when I, I work very closely with the dev team, so it’s

Frode Hegland: All right. You have some questions there, and I think they’re very, very, very good questions. First of all, the reason why the last thing in this list is last is because it should take the most time — that’s why it doesn’t have a time next to it, while the other ones are constrained. All of these fully overlap; there’s absolutely no question about that. Anybody can contribute to anything. There will, however, be some —

Speaker6: Sorry.

Frode Hegland: I’m sorry. There will, of course, be some homework, so to speak, for some of the groups, like the dev team: they also work Friday, and they work other times on weekends. That’s something that shouldn’t be imposed on anyone else. But if anyone wants to code something and they talk to Adam and Andrew, who are kind of architecting it, or if someone wants to really discuss the architecture, that’s absolutely great. This is not meant to segment and reduce; what it is simply meant to do is make sure that we have openness for that. Now, the interactive explorations: later on today I will show something that we went through yesterday, Andrew and me. It is a proposal for how to do a basic thing, but it’s so basic that user story mapping will probably change it. But we still need to have a free-form “hey, I saw this thing, isn’t that cool?” time. That is a freedom that user story mapping will help us focus. Does that make sense at all?

Andrew Thompson: Would it be better to call the user story mapping section — whoever wants to join, it’s totally inclusive, right — essentially design and strategy discussions?

Dene Grigar: That sounds fine. That sounds great, Alan, I like that.

Speaker6: Okay. Great. Absolutely.

Dene Grigar: I’ll make that change right now.

Frode Hegland: I see your hand, Peter, but just to finish a little bit with Alan. You see, the thing that I find is a point of stress for me right now: your perspective, Alan, is absolutely crucial, and it has to be there, because it is an organized, thoughtful thing. But we also need to give a bit of space for just experimenting, and I wouldn’t want these to be in the same bucket, because then it’ll be very confusing. That’s why it’s separate. So when we do user story mapping, we have the freedom to not think about anything specific outside of what we’re doing. That’s really the key.

Speaker6: See, this is where —

Andrew Thompson: This actually kind of supports my point a little bit, in a way, because that suggests to me that we actually have two different understandings of the phrase “user story mapping”. I use it as a form of understanding a world outside of myself, not exploring possibilities of what I want to see, right? So having a more open phrase, or a more open way to pursue it, I think allows both to exist, because both are necessary. It’s just that I have a clinical definition of user story mapping as being, honestly — you know, when there’s money behind the product and it’s assumed that there’s a problem being solved, then you take this route. Already we have to change those rules, because it’s not a standard project in that way, right? It is more emergent and it is more exploratory. So user story mapping — or maybe something close. Lowe’s had a thing — when I was at Home Depot, they were our enemies — they would have narrative-driven design, which is actually a really good technique for this kind of thing. It would be imagining a story with technology you can’t have in your hands yet, to see how it might play out.

Speaker6: Yeah.

Dene Grigar: So, Alan, that’s great. Let’s let Peter talk and then we’ll come back and kind of wrap this section up and have some next steps on this. All right. So Peter.

Speaker6: And then okay

Peter Wasilko: I was picturing, when I was hearing about the story mapping, something more along the lines of building up a state chart to represent the course of interactions with the system. And let me just drop in — actually, I’ll drop it in Slack; that’s probably a better place than to just drop a link here. Let’s see. Slack. One second. FTL lab. There’s a software tool for building state charts in JavaScript that also provides visualization hooks, and something like that might be a useful tool for us to use. Also, I was wondering if we could get a single shared Author document or a Zotero group that we could start putting all of our citations into, so that we’d have one canonical point where we could see all of the references that everybody in the community is using to hook into the literature.
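
Peter’s statechart suggestion lends itself to a small illustration. The transcript doesn’t name the tool he links in Slack, so the sketch below simply assumes a statechart library such as XState (v4 API), with hypothetical state and event names, purely to show what modelling the course of interactions as a state chart could look like:

```js
// Hedged sketch: a document panel's lifecycle as a state chart.
// Library choice (XState) and all names are assumptions, not the project's tool.
import { createMachine, interpret } from 'xstate';

const panelMachine = createMachine({
  id: 'panel',
  initial: 'resting',
  states: {
    resting:  { on: { PINCH_START: 'held' } },                      // grabbed with a pinch
    held:     { on: { PINCH_END: 'resting', THROW: 'drifting' } },
    drifting: { on: { SETTLED: 'resting' } }                        // velocity has damped out
  }
});

const service = interpret(panelMachine).start();
service.send('PINCH_START');   // state: 'held'
service.send('THROW');         // state: 'drifting'
```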

Dene Grigar: Okay, this sounds great.

Peter Wasilko: Also, just another process thing: I find that Hookmark is an invaluable tool. What I’m doing now when I’m coding is, if I pull up a web page or some documentation, I’ll copy a Hookmark link to that and link it to the source code document that that information went into. And then hopefully at some point we might be able to automate pulling out all of those links attached to the document and feeding them into whatever papers we’re writing up.

Andrew Thompson: Peter, can you throw that into the Slack channel? I’d love to — okay. I think all those points are valid; we need to figure that out. And I want to hear more about Hookmark. That reminds me of something real quick, if I could say it before moving on. I think a really early goal for these Wednesday meetings, and perhaps, I don’t know, the Fridays, is to figure out how to make sure that we’re not wasting time. Basically, one of my biggest concerns about the user story mapping angle is to put together some kind of map and then realize that the background constraints have changed and that whole body of information is worthless — which, after that’s happened a few times in your life, you’re like, I’m not going to do that again, right? So yeah, efficiency would be excellent.

Dene Grigar: Yeah, I agree with that. The projects we do in my lab — a lot of them are time-based, real hard deadlines, and some of them are not; some of them are like, we have as much time as we want. And so we’ve learned how to scope and how to cut, you know. So I think we’ll be fine with that, but it’s good to remind us about the hard-deadline issues. And we do have a hard deadline, right — we have one year to get this first part done. And from Frode’s and my standpoint, we actually are trying to push the first year earlier, so we have our own kind of in-our-mind deadline that we want to make — like, I don’t know, September, for Poznan. And we’ll be developing a timeline based on these things, which we’ll give to all of you to look at and comment on after today and after this week while he’s here. There are two more things I should mention, Mark, before I turn to you — two activities we’ve got to do yet. One is the naming conventions, and we’re going to be working with the dev team on Friday for some of this, like how do we set the naming conventions and the folder structures that we need. And secondly, we need to do the timeline, and we’ll have — that’s ours.

Andrew Thompson: That’s a thing I would like to be involved in, whether it’s on Friday or not. But naming conventions — they flow both ways.

Dene Grigar: So yeah. Yeah. What I thought we’d do is start with the dev team and then turn it out to everybody, and everybody will have that document; they’ll mark it up, and then we’ll have a final version, you know, in a week or two. There’s no rush on the naming convention because we’re not producing anything yet, right? So we do have some time to develop a really good, solid naming convention. And this is the exercise I’m doing right now for the big project I’m doing with the university. The university has asked us — my department — to do their giant fundraising recruitment campaign for 2024, and so I’m in the middle of building that right now. We just got the teams together; we know what our hard deadline is, but we have a set of deadlines we’ve got to hit to get to that final deadline. So I’m in that exercise right now, and hopefully we’ll have something for them next week. Mark, quick comment, and then can we go into —

Speaker6: The next section. Yeah.

Mark Anderson: No, it’s very quick, and I’m putting it in here because I’m not quite sure where it fits. One of the things is: how do we know when we’ve completed something? I ask in this sense because quite a lot of what we’ll do is exploratory — so how do we record that we’ve passed the threshold, okay, we now know we could do this thing? Because we won’t know initially why we were going to do it, since some of it is deliberately exploratory. And the other quick thing that goes alongside that is, when we do something like, okay, we’re going to put a file into a space — which file did we pick and why? Even if the why is “it’s just the one I had to hand”. I think that’s very important, because that’s useful looking back, so we can see what is and isn’t covered. We don’t have time to cover everything, but it means we can have a more meaningful discussion at the end with the funders and things. That’s it. Yeah.

Dene Grigar: We usually put together a spreadsheet of all the assets, and we’re right now building that spreadsheet of assets for this campaign that we’re doing with the university. But before we create a spreadsheet of assets, we have to know what the hell those assets are, so we’ll do some asset identification exercises in the near future, Mark, absolutely. And then we assign those to certain people, and marking that off — as project manager, that operations person — that’s going to be my job.

Mark Anderson: Yeah, sure. And I don’t make the observation so that the tail wags the dog here; it’s just to make sure it fits in somewhere. But that sounds covered, so I’ll shut up now.

Dene Grigar: There’s so many things we have to do, Mark.

Frode Hegland: Oh, Mark, please don’t shut up. And on that point: first of all, I’m very glad we’re going to have weekly agendas. So what I think we can do is, at the bottom of the agenda — if there’s something we’ve agreed on, put it right at the end of the agenda. What do you think, Dene? If there’s something quite firm — or at least, because we follow an agenda, we have a recording of this, so at least that is one way of putting it down. But we do need to work on exactly that, and I expect Alan will contribute to that method as well. Now I have a more important question for you.

Speaker6: Mark.

Frode Hegland: Did you get a Southampton hoodie? Because I have a Washington State hoodie now, and I don’t have a Southampton one.

Speaker6: So. Okay.

Frode Hegland: Just wanted to check. Peter, please.

Peter Wasilko: Yes, I found that I sort of adopt a scatter gather methodology when I’m developing things. So what I’ll wind up doing is I’ll start spinning off little clones of a blank starter template. I’ll get one single component working, and then I’ll wind up with a folder with a whole ton of these little experiments. Then I shift into gather mode, where I start trying to cherry pick the pieces of each one of those little separate standalone demos and merge them back into the base template. So. We might want to reflect that in a naming convention. Oh, and also it helps to put the date and the components that you’re relying on. For example, I’ll go year, month, day. I’ll have a dash, and then I’ll have the names of any key libraries or technologies that that particular piece of code depends on another dash, and then the actual name of the file afterwards. Well, the name of the folder that the project is in afterwards. And that’s really helping to organize and gather those things back together at the end, because I can say, okay, you know, I had three different attempts over the last two years at using Neutralino as a basis for building desktop apps. And here was the order in which I tried them. And I can go back and see which one worked, whether something broke with a newer version of Neutralino. So now it doesn’t work anymore. And that building the time frame into the file structure really helps.
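
As a purely hypothetical illustration of the convention Peter describes — date first, then the key libraries, then the project name — a scatter folder might look like this (the names are made up for the example):

```
2022-03-14-neutralino-desktop-shell/
2023-06-02-neutralino-sqlite-notes/
2024-01-31-threejs-webxr-pinch-grab/
```

Sorting alphabetically then gives the chronological order of the attempts, and the library segment shows at a glance which dependency each experiment was built against.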

Speaker6: Thank you. Thank you. Peter.

Frode Hegland: I think we’re done with the intro. Should we go on to the actual agenda? Are we cool with that? We do. Update from Andrew and Adam.

Dene Grigar: Yes. They’ve made some progress.

Speaker2: You want to — yeah, start, Andrew, because you have the most — you have the visual things, and I really think we should start there. And actually you had a thing to show last week, but we talked over your presentation, so... Yeah, that was fine.

Andrew Thompson: The dev progress is always slower than non-devs expect, so if I end up skipping a week for presenting, it’s not the end of the world — I’ll just have more to show the week after. So, nothing particularly exciting, but I’ve been working on basically getting the hands to do what I want them to do inside of VR, and all the gestures are currently up for change; that’ll potentially be a good discussion point for later. I can’t stream from the headset, so I put little video clips into a slideshow here that I’ll open up. But in case my internet makes it laggy, I also have a link here that people can follow along with if they would like. Let me screen share here, and I’ll kill my video to save bandwidth. Okay, so the first thing is just a basic finger pinch — hopefully you guys can see the screen. And this is as simple as: you pinch your fingers together to grab objects, and you can pass them between your hands. When I was doing some initial testing with this project, but also before this project on some of the game projects I’ve worked on, people intuitively want to pass objects between their hands, especially if they’re larger, and often VR does not support that naturally — when you are holding something, the other hand can’t take it. So that was something that I felt should be done the other way around. Most of the testers that I bring in are people who have never touched VR before, so they come in with expectations of reality rather than game expectations. So passing between the hands is what they want. There is a downside with that: because now you’re not holding it with both hands at once, things like scaling have to be implemented in a different way. So this is, of course, for discussion, but this was my initial finding with the grab. And I chose pinch rather than a whole fist grab, because pinch is kind of the default grab interaction in most headsets for UI stuff, so I wanted to carry that over so it’s consistent inside our application.

Dene Grigar: Andrew, is this done inside the headset?

Speaker4: Yeah. The little video clips are recorded in the headset — is that what you’re asking?

Speaker4: Yeah. It’s just not live right now. Alan, I see your hand.

Andrew Thompson: Yeah. Thanks, Andrew. So first question, I’m assuming this is in Meta Quest three.

Speaker4: This is in two. Actually, it’s all I have at home.

Andrew Thompson: Okay, great. And a broader question — I love what you did already; it’s impressive. This is maybe for the group or for yourself, but I’m curious as to what are the common signals that you can do with your hands. Like, for instance, a signal I was tinkering with would be using my three fingers to press my palm, but I’m assuming that’s not a thing that can easily be registered.

Speaker4: Yes. So I can code in kind of whatever gestures we want, and you’ll see some of those come up in a bit in the slideshow. The only one that’s built in by default is the pinch — that’s the only thing that XR supports natively, because it’s so common, so I feel like we have to use that one. Now, the Quest headsets — I’m sure you know this — on the Quest headsets, if you point your palm towards yourself and pinch, it opens up a menu. That’s not something we can override, and that’s only on a Quest headset, so that’s kind of at a higher level — that’s the system. Yeah.

Andrew Thompson: You’re right. Yeah. So a document that would probably be useful for you, and for others, would be: here are, as you just said, the common gestures, and here are the higher-level system gestures, which you can’t use. Okay. Great. Thanks. Yeah.

Speaker4: And from what I understand, talking to Brandel, the Vision Pro does not have that palm pinch. We still don’t want to use it because it would conflict with the Quest, but it has a — like, you look up and then you get a menu. I don’t think that’ll be a problem for us, but those are the two main ones I know about right now. Okay, so I’ll go to the next slide here. So that’s just the —

Speaker6: Andrew, I.

Frode Hegland: Just have to ask you, this is brilliant. And I’m very happy to see it in your headset in the lab. But is this something that the rest of us can use in our own headsets right now, or is it only in dev mode?

Speaker4: And this is over on the FTP server for the Future Text Lab. I’ve put it in my dev folder, so it would be right in there, as well as last week’s update — but last week’s update’s kind of buggy.

Speaker2: I can post the links while you continue the presentation. Should I put them in Slack? Yeah. Go ahead.

Speaker4: Thank you. Adam. And Peter, I see a hand before I move on.

Peter Wasilko: Yeah. I was wondering if this will be exposed as high-level events. So could I be writing a piece of JavaScript code and have, you know, an on-pinch handler or an on-left-pinky-to-thumb handler, at a high level of abstraction like that?

Speaker4: Yeah. Right now it’s not — my code’s really gross because I’ve been prototyping and just trying to get stuff to work. That’s a good idea; that’s probably how it should be done in the end. But currently it’s not. It’s just getting stuff functioning.
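
For readers following along, a minimal sketch of the kind of high-level event layer Peter is asking about might look like the following. The class, event names, and helper functions are all assumptions for illustration; this is not the project’s actual API:

```js
// Hedged sketch: wrap low-level gesture detection in named, subscribable events.
class GestureEvents extends EventTarget {
  update(frame, hand, referenceSpace) {
    // detectPinch / detectTwoFingerPoint are assumed helpers built on joint poses.
    if (detectPinch(frame, hand, referenceSpace)) {
      this.dispatchEvent(new CustomEvent('pinch', { detail: { hand } }));
    }
    if (detectTwoFingerPoint(frame, hand, referenceSpace)) {
      this.dispatchEvent(new CustomEvent('point', { detail: { hand } }));
    }
  }
}

// Application code then stays at the "on pinch" level of abstraction:
const gestures = new GestureEvents();
gestures.addEventListener('pinch', e => grabNearestDocument(e.detail.hand)); // assumed handler
```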

Peter Wasilko: Fair enough.

Speaker4: All right, let’s move on to the next one — we’ve seen pinching enough. So this is my temporary solution to scaling, because you can’t hold the document with two hands at the same time, given my previous decision about grabbing — all of this can change, of course. So I have a two-finger point, and you’ll see here — I’ll demonstrate the gesture there — if you point at the document, you can scale it like that. I originally attempted a one-finger point, and that kept triggering accidentally, because when your hand just kind of rests, you naturally kind of point sometimes, and that was starting to scale documents by accident, and it was really frustrating. So I switched it to the two-finger point: still intentional, but less triggerable by accident. And for most of my testing that makes a lot of sense and it works well, except the finger tracking is a little bit inconsistent on the Quest 2. I’m sure it gets better with the Quest 3 and even better with the Vision, so I’m not too worried about this. I’ve tried it often —

Speaker2: I’ve tried your prototype in the Quest 3, and it works quite well already.

Andrew Thompson: Okay, that is wonderful to hear, because the scaling is like the most inconsistent part right now for me. It often loses my middle finger when I put the two fingers together, and it breaks the scale. That’s also because I have bad lighting where I work. But if it’s more consistent for you, Adam, that is a good first step, and we can consider changing this. We could make it three fingers — that feels less comfortable with your hand, though. One thing we originally talked about was putting all five fingertips together to make kind of a point. But also, no need to add extra gestures to this if the two works fine. So, something to discuss, potentially. Any comments before I move on to the next bit?

Dene Grigar: This is great.

Speaker6: This is awesome. Homework.

Brandel Zachernuk: I joined late, so I’m curious — are you evaluating based on curl factor and stuff like that for recognizing the two-finger pointing and so on?

Speaker4: I didn’t quite hear what you said there, Brandel. With what I’m using — what was that?

Brandel Zachernuk: Yeah — are you using curl factor, like the... some other geometric measure, the dot product of the...

Speaker4: I’m actually just doing distance between them. So if the two fingertips are close enough to each other and they’re far enough from the wrist, it means you’re pointing. It’s really simple and it works really consistently.
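
A minimal sketch of the distance check Andrew describes, using the standard WebXR Hand Input joint names; the thresholds and the helper itself are assumptions for illustration, not the project’s code:

```js
// Hedged sketch: "two-finger point" = fingertips close together, both far from the wrist.
function isTwoFingerPoint(frame, hand, referenceSpace) {
  const joint = name => frame.getJointPose(hand.get(name), referenceSpace);
  const index = joint('index-finger-tip');
  const middle = joint('middle-finger-tip');
  const wrist = joint('wrist');
  if (!index || !middle || !wrist) return false;   // tracking lost this frame

  const dist = (a, b) => Math.hypot(
    a.transform.position.x - b.transform.position.x,
    a.transform.position.y - b.transform.position.y,
    a.transform.position.z - b.transform.position.z
  );

  // Assumed thresholds in metres; tune per headset.
  return dist(index, middle) < 0.03 &&
         dist(index, wrist) > 0.10 &&
         dist(middle, wrist) > 0.10;
}
```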

Speaker6: Okay? Okay.

Speaker4: And then it just casts a little ray from the tip.

Brandel Zachernuk: And the ray is the one from the wrist to the fingertip.

Speaker4: Right now it’s from — I think it’s the second bone of the finger — and it points slightly past the fingertip, so in case you accidentally stick your finger, like, into the document, it still grabs it. But it’s not from the wrist, so you can still move your fingers rather than your entire hand to scale. We could change that if it doesn’t feel right.

Speaker6: Yeah.

Brandel Zachernuk: So I can give you a little bit more — like that curl factor metric; I think maybe Adam’s played with it as well. I can give you a rundown on that sometime. Sure — do you think that would be better? I think it would give you more freedom to understand what is a point. So you’ll be able to — I mean, not that you want to use a single-finger point for this, but you would be able to identify a single-finger point with fewer false positives, as well as to have a more robust notion of what is a fully straightened finger rather than a mostly straightened finger. So yeah, we can chat about that. But this is awesome — really great work.

Speaker6: I’d love to hear about that as well.

Andrew Thompson: Brandel. Or if you have documentation, so many questions about how one would figure that out.

Speaker4: Yeah, yeah, I would love to learn about it too — maybe a Friday conversation. But I’m curious: does it still allow for kind of a range? Because right now, with the distance, I can give it a slight range, so you don’t have to have a perfectly stiff finger, you don’t have to have the fingers completely touching — they can be close, which, as you’re moving fast, is more useful. The downside is, of course, you can trigger it easily when you don’t want to.

Speaker6: What had said. Now.

Brandel Zachernuk: He’s wearing the Quest Pro. So I don’t want to derail it, but are you familiar with the concept of a dot product of a unit vector?

Speaker4: Less so. Sounds vaguely familiar. I’m sure I’ve worked with it. Don’t remember what it is.

Brandel Zachernuk: So a dot product is — if you have a vector which conveys a direction, so x, y, z — then the dot product is when you go a-x times b-x, plus a-y times b-y, plus a-z times b-z. And what you find is, if those are unit vectors, i.e. if their lengths are one, then the dot product is equivalent to the cosine of the angle between the two vectors. And so it’s a really great test for collinearity of those things. So you can use it to figure out the angle between things in three-dimensional space. That’s really —

Speaker4: Useful.

Brandel Zachernuk: It’s incredibly useful. So what it means is that not only would you be able to use a single finger, but you’d also be able to determine the curl factor on the other fingers, in order to require that they have a minimum amount of curl to do those things. And it would also be able to distinguish — you can’t see it, but I’m doing a finger-cutting gesture — you’d be able to determine the horizontal angle between two fingers. You’d be able to determine the other angle axis of the dot product between two fully straightened fingers, based on whether they are parallel or convergent or divergent — obviously, you can’t converge your fingers, so... Yeah, that’s useful, robust, really, really interesting. So yeah, I’ll go through it in more detail on Friday.

Speaker4: Yeah — something like, say, the Spock “live long and prosper”: that would not work with the way I currently wrote this code, so what you’re talking about would absolutely fit that. I don’t need that gesture, but, you know, it’s still a good way, and there’s probably going to be a gesture that will require it at some point. So yeah, I’d love to talk about that Friday, maybe more in depth — you can show me what you’re talking about.

Brandel Zachernuk: Yeah. The other thing about it is that it gives you the ability to build much more sophisticated classifications of gesture shapes without having to lean on machine-learning-based pose training — which, you know, once you actually have the pose of the hand, then you shouldn’t need to use ML to ask, “is this doing a peace sign?”. So yeah, it’s pretty constructive. But this is really cool, and I’m really excited to see such robust work already.
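
A compact sketch of the curl-factor idea Brandel outlines: take the dot product of unit vectors along successive finger bones, so a straight finger scores near 1 and a curled one scores lower. The helper names and thresholds below are assumptions for illustration, not anyone’s actual implementation:

```js
// Unit vector from joint position a to joint position b.
function direction(a, b) {
  const dx = b.x - a.x, dy = b.y - a.y, dz = b.z - a.z;
  const len = Math.hypot(dx, dy, dz);
  return { x: dx / len, y: dy / len, z: dz / len };
}

// Dot product: a·b = ax*bx + ay*by + az*bz (equals cos of the angle for unit vectors).
function dot(u, v) {
  return u.x * v.x + u.y * v.y + u.z * v.z;
}

// positions: joint positions along one finger, knuckle -> tip.
function straightness(positions) {
  let sum = 0;
  for (let i = 0; i < positions.length - 2; i++) {
    const u = direction(positions[i], positions[i + 1]);
    const v = direction(positions[i + 1], positions[i + 2]);
    sum += dot(u, v);   // 1 when the bones are collinear, lower when the finger curls
  }
  return sum / (positions.length - 2);
}

// e.g. call the index finger "pointing" when straightness > 0.95
// while the other fingers fall below some curl threshold.
```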

Speaker4: Wonderful. Thanks for the feedback.

Dene Grigar: Wait, wait. There’s more.

Speaker6: Yeah.

Speaker4: All right, so this is a.

Frode Hegland: Oh, sorry. Andrew, I have to interrupt you too — it’s fashionable to interrupt you today. I just tried this in the headset, and it’s really, really amazing to be listening to you and actually doing this. So what I’ve done now: I’ve taken the link that Adam posted in Slack and linked it on our website. That means that for all of you, in your headsets, just bookmark Future Text Lab and it’s really quick to find the link. Anything that we do will be listed there, because getting to Slack and copying and pasting is a mess. So this was amazing.

Speaker6: To see.

Speaker2: Just one side question on that. How public should we be with early work? Because that will be public, compared to Slack where — well, the URL is not totally secret, but it’s posted here only, so it won’t be picked up. What have you thought about that?

Frode Hegland: It’s entirely up to you guys, but my feeling is that we can make it completely public. However, it’s not on the front page — you have to dig through a few pages to find it, which is something we’ll do. You know, we’ll keep that page, but it will be updated all the time. Once we have a new version, I’ll delete that link and add a new link, because otherwise getting access to a kind of secret thing on the web is difficult. And everything you’re doing here is fantastic, almost embarrassingly so. But if you want it done differently, I’m perfectly happy to adjust that workflow.

Speaker4: I would almost say this isn’t necessary, but maybe don’t delete the old links. You could keep them there for archiving, because it’s fairly common for dev projects, say on GitHub — everything’s kind of hidden at first and you’ve got all your updates, and then once it goes public, anyone can still view those old versions. People rarely do, but it’s kind of interesting to have an archive showing, oh hey, at this point we can see how this was the intention and then it changed after this week. I don’t know — I just find it interesting looking back at how projects adjust over time.

Frode Hegland: I think that’s fantastic. And the other thing I would like to ask: for anybody doing these in the team, if you want to give it a title, such as “testing this thing” or whatever, please do that and I will add that as well — it’d be easier to go through in the future than just a link. But now I’ve just updated the page and it says “31st of January 2024” for the link. And it’s amazing, and I will make a huge effort. Please continue, Andrew.

Andrew Thompson: Right. So this is a simple feature, and it’s kind of unnecessary if we’re using PDFs, because you let go and it stays in place. But I find it more natural-feeling if, when you throw something, it actually carries the velocity for just a moment. So I have here a basic velocity dampening and then an auto-rotation, which I want to polish a bit. But if you decide to throw the PDF instead of just letting go, it’ll kind of drift a little bit and then rotate to face the camera, so it’s not at a wonky angle.
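
A minimal sketch of the drift-then-face behaviour Andrew describes, assuming three.js; the names, damping constant, and rotation rate are illustrative, not the project’s implementation:

```js
import * as THREE from 'three';

const DAMPING = 0.92;   // assumed per-frame damping so the throw dies out quickly

// Call each frame after release, with the velocity measured from the hand at release.
function updateThrownPanel(panel, velocity, camera, dt) {
  // Carry the residual velocity, then damp it so the panel drifts to a stop.
  panel.position.addScaledVector(velocity, dt);
  velocity.multiplyScalar(DAMPING);

  // Ease the panel's orientation toward facing the viewer (the auto-rotation).
  const current = panel.quaternion.clone();
  panel.lookAt(camera.position);
  const facing = panel.quaternion.clone();
  panel.quaternion.copy(current).slerp(facing, Math.min(1, 3 * dt));
}
```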

Dene Grigar: This is what we did with Rob’s project. We had panels that we could interact with and we could throw them — we had 12 panels hovering in front of us. You could pull one in front of you, and then when you’re finished, you throw it back and it would move and then settle into a natural position.

Speaker4: Yeah, Dene, right. This is something that kind of carried over from the portal project. It was hard to read because naturally you’re moving your hands all over the place, and the object should match that — but if you toss it out, it’s always at a weird angle, and it kind of hurts your eyes. So having an auto-rotation is useful in that case. And I decided I wanted to try to implement it here as well, at least as a test, to see if it was just as useful for the PDF. Yeah. Peter.

Peter Wasilko: Yeah. Could we have a notion of an elastic attached to that document? So if we reached out, pulled it up to us, and we let go of it, it would get pulled back to the original position and orientation that started in.

Speaker4: That sounds really cool. And I do like that idea. I think the hard part would be basically telling the software which objects should snap back and which ones do you want to actually keep with you? Because, say you grab an object, you’re like, I want to move this, and then it just snaps back to where it was before. That would be incredibly frustrating. But if you just wanted to briefly look at an object, it’d be wonderful. How do you determine the difference between that? Is it different gestures? Are we adding another type of pose? Really good discussion topic though.

Peter Wasilko: Yeah, I’d say maybe a quasi mode. Maybe if you had, say, your thumb touching your pinky and your left hand while you grabbed it, that meant you want it to be attached to an elastic. And if you didn’t, you’d have the normal behavior.

Speaker4: One other thing that just crossed my mind — I was going to talk about it later, but we were talking about tools in, like, a settings menu. Perhaps there could be, like, a lock-in-place tool where, once your PDF is in a place you like, you switch to a tool and kind of, like, zap it or something, and now it has, like, a faint border around it — that means it’s elastic. So whenever you try to grab it, it will always snap back to where it was before, unless you choose to break the elastic with the same tool. I don’t know, just throwing that out there. We could have tools for more advanced functionality that’s more situational.

Speaker6: Yeah.

Speaker2: I tried kind of having it so that if you had it pinched and dropped it with your hand stretched out — a slightly inconvenient gesture that you don’t run into accidentally, like you would when dropping a paper on the floor — that drop returned it to the original position. That worked quite well, I think. But we really need to watch out for gesture creep, so we don’t populate the whole gesture space with things that you accidentally run into and that are hard to learn as well.

Andrew Thompson: Yeah, that does remind me of that conversation you had with me, Adam, about how potentially the different documents could remember the last place they were, and then some kind of gesture would send them back — which would be pretty cool. Once again, that is another gesture, but yeah, that might work as well. Anyway, good discussion topic. If people have more ideas for it, we could absolutely go for that right now; otherwise, maybe let it simmer in the back of our minds and we can pitch more ideas as we think of them.

Peter Wasilko: Pitch recognition of Latin phrases go all out, Harry Potter.

Speaker4: Love it. Okay. And then let’s see, next slide here — I guess we’re watching it again; come on. So here’s the remote grab, which — as soon as you can start throwing things, you need to be able to get them back. So I have a remote grab, and this gesture is kind of the result of a lot of brainstorming sessions. It makes the most sense: Adam and I both came up with it independently and pitched it to each other, and we were both like, hey, we both had the same idea — which probably means it’s intuitive. But because it involves so many fingers doing specific things, it does have some trouble with the Quest 2 motion tracking. Once again, that’s probably going to be solved on the 3 and then the Vision, but we’ll see. You basically make a point by pulling your three bottom fingers together, and if you tap your thumb down, it selects. So you can sort of grab objects from a distance and move them around in your space, and you get a bit of a ray there to show you where you’re pointing, for convenience. It’s a bit shaky because that’s the motion tracking.

Speaker6: Yeah.

Frode Hegland: And just in terms of the suite of gestures, one of the things we talked about yesterday is the difference between a novice and an advanced user. There’s nothing wrong with optimizing for the basics; something like throwing the document away — of course you need to get it back, but it doesn’t have to be at the same level. It can be cool like you’re doing there, but it’s just a different thing — the difference between immediately intuitive and “wow, I like this amazing gesture”, like an amazing keyboard shortcut. So it’s very cool.

Speaker4: Yeah, I think I heard most of that. Your Zoom’s doing a very good job cutting out the background noise, which means it’s also cutting out you quite a bit. But yeah, that’s the basic movement. And then I’ve got one last bit to the remote grab — I guess it’s playing again; come on — and that is, like, a projection. Yeah, I’ll show you and then I’ll talk over it. If you curl your finger after selecting the object, you can move it towards you and away from you inside the space. So it’s useful for kind of pushing things away or bringing them back to you directly.
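
A small sketch of the “curl to push or pull” mapping Andrew demonstrates: once an object is selected at a distance, the curl of the index finger (0 = straight, 1 = fully curled) sets how far along the pointing ray the object sits. The distances and names are assumptions for illustration, and the vectors are assumed to be three.js Vector3s:

```js
function updateRemoteGrab(object, rayOrigin, rayDirection, indexCurl) {
  const MIN_DIST = 0.3;   // metres: closest the object comes to the hand
  const MAX_DIST = 3.0;   // metres: farthest it can be pushed away

  // Straight finger keeps the object far away; curling it pulls the object in.
  const distance = MAX_DIST - indexCurl * (MAX_DIST - MIN_DIST);
  object.position.copy(rayOrigin).addScaledVector(rayDirection, distance);
}
```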

Dene Grigar: I like this idea of thinking through the gestures that you’re doing now, and I think we could start to inventory the kinds of gestures that you’re doing. So today, if you and I and Frode can sit down and make a list of the gestures that you’ve shown us today, and let the rest of the team think through this — because what we’re developing here is a gestural language. It’ll be interesting to see if it continues to work in the other headsets. Yeah, it’s a good point, Frode.

Speaker4: Yeah, I’ve been trying to make them avoid overlapping with system functions, and they should be universal, but they might not be — I might be making assumptions, so it’d be great to check. A design principle that I’ve been holding onto, at least — and we could change it — is basically: both hands need to act the same. I don’t want it to be biased towards left- or right-handed users, so if the right hand can do something, so can the left hand, and they do the same thing. I know that, say, on the Quest 2, when you do the palm turn and tap, the right hand and the left hand do different things — they have different menus that open up, which I guess kind of works because they have different things they want to encompass. But I would rather have them be the same, so you don’t have to remember, on top of all these different gestures, which hand does which gesture. Yes, Peter.

Peter Wasilko: Okay. One thing I’ve always wanted would be a way to project my hands forward past arm’s length. So if there was some mode that would allow my hands to fly all the way up to my top bookcase, instead of having to go and get a step stool to reach the top books. So imagine if, I don’t know, maybe touching one particular finger on your hand would tell it: now we’re moving into a deflection mode as opposed to an absolute motion mode. So then my right hand could move out through 3D space far beyond the length of my actual arm, move around to navigate to where something is, and then have the hand be represented and operated further out in the space. With one motion the hand would project forward until it got to a point eight feet away or twelve feet away, and it would then be as if I was at that location. And then once I released it, it would come back to its original position.

Frode Hegland: You really, really need to get a headset to feel these things, because they are, yeah, kind of theoretical otherwise. It’s something very different, obviously. I’ll let Andrew answer, but please, please, please get a headset, because there are reasons why that doesn’t quite map onto the hardware. Andrew, please.

Speaker4: Yeah, it’s it’s a very cool concept. Right. And I haven’t really seen it done before. But unfortunately there is a reason why it hasn’t been done before. And it’s mostly in the sense of you can’t really tell what you’re doing. Well, that far away. It’s it’s theoretically possible. But it’s not supported natively, so you’d have to build an entire new system on top of the hand tracking. Which is a game engine.

Peter Wasilko: You could do native hand tracking and then put up a virtual copy of the native hand that would look the same, so the user wouldn’t realize they were no longer in native hand tracking, in order to detect the hand exactly and move it around in 3D space.

Speaker4: But the solution that most developers have gone with, instead of stretching the hand, is just to move the user. So that’s why we have the remote teleport that you often see in projects. It’s easier just to move the player there so they can see up close, rather than do something way off in the distance. Right.

Speaker6: We also.

Brandel Zachernuk: Yeah. So we tend to teleport, and we tend to indicate the hands at close distance and then provide feedback of the action at a distance. So if you look at video games like Half-Life: Alyx, one of the things they do a lot is show arcs to the object that is under operation, under action, similar to the way the teleport works. So if I did something right now, you highlight the object and you provide a sort of connective indicator between here and there, to say that what you’re doing here is having an impact there, rather than moving your hand to there. Something I have done a little bit, which is more possible in virtual than augmented reality, is what’s called redirection or redirected hands, similar to redirected walking. And you get used to it. I’ve had constant redirection just for the sake of it, and it’s effective, but it’s often not variably redirected. So yeah, you can try it, but I would also join Frode in exhorting you, Peter, to try a headset to work out what works here.

Frode Hegland: Okay, but one thing we do not want to do — and again, showing you proudly my Macintosh T-shirt, which I bought with Brandel and Bill Atkinson and Bruce Warren. Hahaha. Coolest day ever. We want to be as little different as possible. That’s what intuitive is. Intuitive is obviously domain specific. So if people are used to certain gestures in certain things, then when they put on ours, it should be as expected as possible. Except for — and this is what I tried to say earlier with novice and expert users — an expert user should be able to go beyond that. But at least in the beginning it should be very, very plain. You know where your hands are, so to speak. Mark.

Mark Anderson: Yeah, just very quickly, and it’s really more a rhetorical question, because I don’t want to get ahead of ourselves; I know at the moment we’re looking at the basic manipulation. But I’m just wondering, thinking ahead, whether the gestures we’re looking at at the moment, which are obviously mainly for object manipulation at the gross level, are something we see being used at the next level. So when you’ve got your object, in this case your document, do we use the same things for deconstructing it, which would be the next obvious thing to do, and for how you pull it apart into its constituent bits in the post-print world, or do we have a separate gesture set for that? I don’t know the answers, and it’s sort of not for today’s thing, but I just wanted to put that out there because I think that’s another interesting transition point as we move things forward. Yeah.

Dene Grigar: Thank you. Ellen.

Speaker6: So many hands.

Andrew Thompson: Great job Andrew.

Dene Grigar: This is good. This is great. This is what we wanted today. And we have... wait, wait, there’s more. Frode’s got something to show, too. So, Alan, and then Peter.

Alan: Okay. Yeah. Maybe there’s a different way to think about this, but this reiterates to me how any meaningful design work or strategy work needs to have a fully immersed grasp of the constraints and capabilities and potentials of each of the devices and, I don’t know, their appendages. So maybe it should even be considered as a designer’s research into development, as opposed to sitting apart and just, you know, doing anthropology or something. I understand the desire to separate design and dev, but without that grounding you get a disconnect. I used to be a dev and I still code, so I know how frustrating it can be for devs: if you come up with a design that’s like a search bar that plays music, a developer will be like, what are you on about? This is a totally crazy request. So we don’t want to get into that situation, right? Absolutely.

Speaker6: Okay point.

Dene Grigar: Can we have that discussion as we continue this week? We’re meeting on Friday. Can we can we talk about this next week?

Speaker6: As we move.

Dene Grigar: Along. Good, because I think we want to get some other foundational materials done and and put that as part of our next steps. Absolutely. Peter, I mean

Speaker6: I yeah.

Peter Wasilko: I think we also need to make sure that whatever we do is sort of attachable and recordable. So when we’re gesturing with our hands, it would be really nice if, as those gestures were recognized, they got spit out into a text log, so that we could pipe a text log back into the process, allowing the system to run as if someone had been manipulating their hand, based upon the recorded script of past hand movements. That’s the Linux approach there.

Andrew Thompson: Actually, that is something we talked about last Friday. Not exactly that, but the same idea, where every action is logged. And this will be a long way down the road; this might be a year-two thing, if we even get there. But we wanted to have an undo system, so anything that was done can be undone, which means every action needs to be logged and every action needs to have a reverse action. So that would be right in line with what you’re talking about, where if you grab something and move it, it remembers where it was and that you just did a grab. So if you hit undo, it goes back to where it was. It’s a lot of work, but, you know.
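
(A minimal sketch of the logged, reversible-action idea described here — every interaction recorded as a command with an inverse, so the log can drive undo and, as Peter suggests, be replayed from text. All names are illustrative, not the project’s code.)

```typescript
// Sketch: an action log where every action knows how to apply, undo, and serialise itself.
interface Action {
  kind: string;          // e.g. "grab-move"
  apply(): void;
  undo(): void;
  toLogLine(): string;   // serialisable, so a session could be scripted or replayed
}

class ActionLog {
  private done: Action[] = [];

  perform(action: Action): void {
    action.apply();
    this.done.push(action);
    console.log(action.toLogLine()); // or append to a text log file
  }

  undo(): void {
    const last = this.done.pop();
    last?.undo();
  }
}

// Example: moving a document remembers where it came from, so undo can restore it.
type Vec3 = { x: number; y: number; z: number };

function makeMoveAction(doc: { position: Vec3 }, to: Vec3): Action {
  const from = { ...doc.position };
  return {
    kind: "grab-move",
    apply: () => { doc.position = { ...to }; },
    undo: () => { doc.position = { ...from }; },
    toLogLine: () => `grab-move ${JSON.stringify(from)} -> ${JSON.stringify(to)}`,
  };
}
```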

Speaker6: Can I interrupt here?

Dene Grigar: Alan’s got a leave, and I’d love for him to see what photos got before he goes. Can we do that now before Alan goes? Ali, can you stick around for just another minute or so?

Speaker6: Yeah, yeah.

Frode Hegland: Okay, so this is... I am very, very happy that this exists. This is something that we went through yesterday. Please note, this is just an exploration, and we’ll probably end up changing every single aspect of it, but it makes me much more relaxed because it means that we have a thing to argue around. So this is not “I went off and designed something with Andrew, and Adam went into it, and the rest of you have to follow.” That is not what it means. I’m just waiting for Leon to connect. Is he on there? Connecting to audio. Is he there? Okay. Right. So that’s really important: it’s just an inspiration for us to argue around. And a key thing is something Andrew pointed out. What you’re seeing now is a rough indication of being inside your headset, looking at the web page to access our stuff. Normally, in the bottom right-hand corner you would have an “Enter VR” or “Enter AR” button. Andrew pointed out that’s ugly and we can change it, and that was like the biggest light bulb ever; my brain just exploded. So what we thought about — and this is very much inspired by seeing a lot of the Mac stuff last week — is that instead of having a little button in the corner, there’s a huge sphere that says what you can read here.

Frode Hegland: Tap to enter the VR view, and tap me again for controls. It’s really annoying that Fabian is not here, because the next thing you’re going to see is very Fabian. So you touch that sphere, or whatever we call it. You even have a custom hand for this: the little sphere wobbles about, and when it finds your hand it goes onto your wrist as though it’s a watch. So that becomes the self-describing, self-intuitive control point for almost everything that isn’t a gesture. In this case, you were reading this particular PDF on your desktop before you entered the space; that is why it’s here. Or that was the document you had last time you were in the space; that is why it’s there. So there are two control things here. One is, if you tap this, then you get this thing. These icons mean nothing yet; but if we follow this type of direction, this huge control screen is what we would populate with things that should be visible. I see this as being about half of our research. If we do this kind of thing, here it might say: take this document and put it as a mosaic on the wall. Basically, it’s keyboard shortcuts. The idea behind this is partly because we’re using Marius, the organizer of the hypertext conference, as our user, just mentally, to have one human.

Frode Hegland: He puts his headset on, he gets this thing, and he knows now that the control is here. When this opens up, it is not fancy, it’s not sci-fi; it’s really, really basic. But it is almost like command keys, because they become gestures. What happens is that each target is quite large, so once you’re used to tapping on your watch thing and tapping into this space, if you become an expert user you will do that so quickly that you may not even have this shown; it may not even render. So that’s a fun little aside. And listening to you guys now: why not have the gesture cheat sheets accessible from here as well? Sorry, my slide order was a bit odd; what happened there was a thing being clicked on in the view chain. So that’s the first half of this. Just to really bring the point home, the first half is that, inspired by Brandel, you have all your controls on the back of your wrist. Now the next thing is: what if you want to change library or change environment, but you want to do the opposite? So you long-press on your wrist, and the sphere expands and you’re now inside it. This is obviously an AI-generated render, but the idea is that you have an entire spherical, 360 room with all your different libraries and all your different sections.

Frode Hegland: You can choose them, and your environments, and then you go back into the room that you were in. So it basically means that this thing on your wrist becomes your point of reference. Oh, what was the prompt for the image? “The spherical library.” That was the prompt. So when these things came together — and to make it really, really clear, going inside the sphere to be in the library — Andrew said he absolutely loved that. It has been really stressful for me over the last few weeks with an infinite canvas, and now we have the notion of this thing being your anchor point. So as long as we even have that intro screen when someone’s in there — unless they’re really stupid, which we can all be at times — you know, maybe even, as Andrew said, if nothing happens for a while, your little watch might pulse. Or we may even be annoying and give it personality, and it may even have text or speech saying: do you want to do something? Tap me. We have a thing to hold on to. I really want to hear your comments. Thank you for the nice comments in the chat. But yeah, tell me how wonderful and brilliant and horrible it is, all at once, please, to give us direction. Come on, guys, pick it apart.
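
(A minimal sketch of the wrist-anchored control idea: each frame the little control sphere follows the wrist joint, a tap from the other hand opens the control panel, and a long press could expand into the library sphere. It reuses `jointPos` and `dist` from the earlier sketch; the object names, 4 cm touch radius, and 800 ms long-press timing are assumptions, not the project’s values.)

```typescript
// Sketch: wrist-anchored control sphere with tap vs long-press behaviour.
const menuSphere = { position: { x: 0, y: 0, z: 0 } as Vec3, visible: false };
let pressStart: number | null = null;

function updateWristMenu(
  frame: XRFrame, ref: XRReferenceSpace, time: number,
  wearingHand: XRHand, otherHand: XRHand,
  openPanel: () => void, openLibrary: () => void
): void {
  const wrist = jointPos(frame, wearingHand, "wrist", ref);
  if (!wrist) { menuSphere.visible = false; return; }

  menuSphere.position = wrist;      // the sphere sits on the wrist like a watch
  menuSphere.visible = true;

  const tipOther = jointPos(frame, otherHand, "index-finger-tip", ref);
  const touching = !!tipOther && dist(tipOther, wrist) < 0.04;

  if (touching && pressStart === null) pressStart = time;
  if (!touching && pressStart !== null) {
    const held = time - pressStart;   // milliseconds
    pressStart = null;
    if (held > 800) openLibrary();    // long press: expand into the library sphere
    else openPanel();                 // tap: open the big control panel
  }
}
```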

Speaker6: I’d love to.

Andrew Thompson: Hear I’ve got to go, but I’d love to hear what Brendel has to say about it. It seems very intuitive and like there would be a lot of.

Alan: Options. And I don’t know enough about the constraints of the gestures, but it seems like you could have a dial, you could tweak the affordances. So, yeah, I mean, it seems like a pretty good base of operations.

Speaker10: Yeah.

Speaker6: Well, so I think.

Brandel Zachernuk: Having modality associated with, or attached to, the wrist is a reasonable way of being able to engage those things. You know, it’s a question, as Andrew was talking about, of what kind of false positives you experience. I’m also conscious about chirality, the left-right ambiguity. So being able to have it on the right wrist would probably be pretty useful, rather than requiring that it only ever be on the left. So, I mean, those are things

Speaker6: I think.

Brandel Zachernuk: I think it’s a good way of reaching things and to be able to kind of have a setup for a menu. The idea of a. It’s good to make sure that there’s enough feedback that people will be able to find things. And so generally, you know, having a long process that doesn’t have an indicated action similar to the like the discussion about left click versus right click and the fact that Macintosh got the one button so that nobody had to evaluate that having simpler actions over more objects can be easier for people to follow. But like, these are all things that will come out in the wash from being able to play it. So I think it’s definitely good enough to start with.

Frode Hegland: Thank you, Brandel. Now, this massive control center should have all of that. There should maybe be a control box or system setting or whatever you want, where you can go in and swap right and left. That is absolutely important. And also, the long press to go into the library is cute, but in this massive control center there should also be a button that says “Library”. So it really needs to be foolproof while we are still fools, and it needs to be gestural when we become artists.

Brandel Zachernuk: Yeah, yeah. And I think there are reasonable paths through it. If you’ve ever used applications like Maya or Blender, they make really heavy use of not just pie menus, but gestural pie menus, where you right-click and drag down, or use a quadrant-based system, to identify different things. So down and to the left is one specific action. You don’t need to wait for the menus to appear to be able to do it, as long as you know it and you have an unambiguous kind of signal. So those kinds of things are not mutually exclusive to what you’re talking about.
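
(A tiny sketch of the quadrant-style gestural pie menu described here: the drag direction from where the press started selects an action, so experts can flick without waiting for the menu to render. The action names and the 2 cm dead zone are placeholders, not decisions from the project.)

```typescript
// Sketch: pick a pie-menu action from the drag vector between press start and release.
type PieAction = "open-library" | "cheat-sheet" | "undo" | "settings";

function pickQuadrant(start: Vec3, end: Vec3, deadZone = 0.02): PieAction | null {
  const dx = end.x - start.x;
  const dy = end.y - start.y;
  if (Math.hypot(dx, dy) < deadZone) return null;  // too small a drag: treat as a plain tap
  if (Math.abs(dx) > Math.abs(dy)) {
    return dx > 0 ? "settings" : "undo";           // right / left
  }
  return dy > 0 ? "cheat-sheet" : "open-library";  // up / down
}
```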

Speaker6: So cool.

Frode Hegland: So Rob is saying here, we need user testing. Absolutely, we do. The first user testing will of course be you, Rob. And testing what Andrew just did in my headset here in this coffee shop in Vancouver, Washington, was another reminder why the Vision will be an absolutely stunning success. The amount of faff and setup just to put this thing on my head is ridiculous, right? Sitting in a coffee shop, I have to draw my space and all of these things. So yeah, we’ve got to test, test, test, and I look forward to more Visions being available to us. It’s clear they’ve got a huge competitive advantage. And in terms of testing, I also really want to say that now that we are at least experimenting with this approach, the stuff in the massive control panel — what should be in it — is going to be what we need to test. Unfortunately Alan had to go, but that’s going to be very much the user stories. We have to put things in there that are actually useful for the academic. And finally, of course, for the coders, making these things possible will be a lot of work, but changing what’s on the menu, and where, should be trivial. So in the beginning we’re probably just going to have text labels or something. I don’t know, what do you all think about that?

Speaker4: You’re muted.

Dene Grigar: I know, I know. I was trying not to be too obnoxious with all the pinging that’s going on in my lab right now. Some of the things I have written down for next steps, which I think maybe is a good time to start thinking about: we need to get the naming convention started, so we’ll start that document. We’re also looking at a gestural inventory and its application, for universal use. And with the headsets arriving Friday, we’ll be able to start having that conversation more widely. We have the Meta Quest 2, the Quest 3, the Quest Pro, and the Vision Pro. And then I think we also need to be thinking about design principles, and Andrew already brought up one of them: whatever the right hand can do, the left hand should be able to do. And as a left-handed person, I can say I totally appreciate that, because there’s nothing worse than working with a pair of scissors for a left-handed person like me, right? I end up stabbing myself with them every time. So we’re starting to set up these design principles. I think everything we’re talking about right now — the slide show, Andrew, that you have produced — needs to go on Basecamp. So I’ll start working on getting that asset organization into Basecamp. Naming conventions will be applied to the Basecamp site as well; right now it’s been kind of hodgepodge. And the last thing I’ll say before Leon takes over is that one of the things we all know from working on projects is that the first weeks are kind of

Speaker6: Just. Review. I’m Russell. And to. We.

Dene Grigar: about getting that tooling up, before you start making something you can experience. Okay. So anyway, just to say I think we’re on the right track. There’s a lot of work that’s left to be done, but I think we’ve got some solid steps, and I’ll turn it over to Leon. Thank you.

Leon van Kammen: Thank you, Dene. I had a small question concerning something Frode said. Frode, you said something about text and labels, which I did not fully understand. Could you maybe repeat that or elaborate on it?

Speaker10: I think he’s away from keyboard.

Dene Grigar: He’s. He looks like he might have frozen. Or maybe he’s he’s a Starbucks, so he might have frozen at Starbucks.

Speaker6: He’s he’s.

Mark Anderson: He’s taking a comfort break. He’ll be back in a sec.

Speaker6: Okay.

Leon van Kammen: Okay. No worries.

Dene Grigar: Put a pin on that. Right back.

Mark Anderson: While we wait, one thing just to pass on: it’s interesting seeing that we’re getting a sort of menu palette emerging in things. I’m curious as to whether we can also look at going past that and straight into decomposition of objects, because that seems to me the richest thing on offer for text consumption. The danger is always that it slightly becomes “put your document in Photoshop, in VR, and do things to it” — and that sounds a bit unkind of me put quite that way, but it’s a thing of just going straight in. The reason I think it’s interesting is because it asks questions about what that implies for the design of the documents themselves, as we cross this sort of threshold in all sorts of areas. I think that’s useful because it also gives us indications of what we might need to do in the authoring tools. Leon.

Dene Grigar: Are you muted now? I don’t see a mute sign.

Speaker6: Must be sorry.

Leon van Kammen: Yeah, sorry. Yeah, that’s a good point, Mark. To extend on that, I was also thinking about the library which Frode was showing, which is basically part of the world you’re in. I’m not sure if this is the right time to discuss it, but maybe it’s good to see whether we want the library functionality to be basically outside of you, which basically enforces VR, or also somehow on the wrist, because that would allow for either an AR or a VR experience. Because once the library is outside of you, it basically forces the user into VR mode, which could be okay. I was just wondering if that was something Frode had considered as well.

Frode Hegland: In the visualization, so far it’s just a sphere, but it could go into all kinds of cool shapes and environments. So yeah, it’s kind of expandable. Is that the aspect you were referring to, Leon?

Speaker6: Yeah, yeah.

Leon van Kammen: I was just thinking that something like the outside library, or the on-your-wrist library, so to speak... yeah, I guess if you keep it in mind during implementation, then it will be fine. Then it can perhaps be both, depending on what the user wants. So, yeah, thanks for that.

Frode Hegland: There are many holes in this idea, and that is absolutely one of them. There’s the question of when you are interacting with one document: you may want to have the library in front of you, so that should absolutely be possible. All we’re thinking about is making it really easy to do basic interactions with one document for a beginner. And then when you go into your library sphere, so to speak, you should have options for more advanced rooms. I think this goes back to what Adam said quite a while ago: a library is not just a list of books, it’s also an environment. So I could imagine you even have the same book in many different rooms. You go into the sphere, and one of them is a reading space, another one is a huge graph space, and so on. This will allow us to do all kinds of explorations. Does that address what you’re talking about?

Leon van Kammen: Yeah, definitely. And you also said a magic word: you said “advanced”. What I also really loved — I don’t know which Mac OS version it was, but in the old days they had an “advanced” checkbox, and it was always unchecked. The benefit of that was that all the dialogs looked very simple; it was so easy to use. And if you pressed the advanced checkbox, then you would get more options. So I’m not saying we should introduce such an advanced checkbox, but I guess it would be nice to first land in something very, extremely simple to just read, without too many bells and whistles, and to also have a way for interested people to go beyond that.

Frode Hegland: That is really, really key. And please remind me if I’ve already told this brief story, but Bill Atkinson last week said he went to see Doug Engelbart. He was a fan of Doug’s work and showed him early Mac stuff. And Doug said: this is all well and fine, the ease of use, but make sure you also support the advanced users, and build and develop keyboard shortcuts. So there are many ways to have a progressive user interface style, and that is absolutely key to what we’re doing now. We are using one human being, Marius, the organizer of the hypertext conference, as our persona — a real person as a persona. He should be able to put on the headset, understand things, get a few things done, and get an idea that more can happen. But even within this environment, time allowing, we need to provide different levels of experience. One of the things I briefly skipped over earlier is this huge, almost childish control panel. If you become an advanced user and the system senses your speed, then when you tap on your watch and point, you don’t even see it; it doesn’t render, because it becomes a gesture. You do this, you do that. So you have a special keyboard-shortcut area, so to speak. That may or may not work, but as long as we support the initial user — and I know you agree — we should absolutely experiment with alternative ways, so that more of a wizard can flow through the space and, as they said about Doug’s demo, deal lightning with both hands.

Dene Grigar: Other comments.

Frode Hegland: It’s a shame Alan had to go because this was supposed to be user mapping, but I guess we need to talk to him because of his goodwill about where that would fit in other Wednesdays, but Yeah. Please, guys. Cockblocker.

Dene Grigar: Well, I have to say, we really want the presentation from Friday to be front and center, right? That’s something we can’t change. So we’ll talk to him about what his time frame is going forward. But there’s no discussion of that part without looking to see what else has transpired in terms of production. So.

Brandel Zachernuk: I think there are two ways of talking about and looking at what we’ve seen today. One is to ask:

Speaker6: What?

Brandel Zachernuk: What can people who aren’t doing further development get out of using and thinking about what Andrew has built so far? What kinds of use cases can they evaluate, and what does that lead them to? And then, what are the other things that require further development that are the most provocative and interesting as a consequence of it? So it seems like what you’ve got so far, Andrew, is a bunch of really good stuff for looking at pages, entire pages of PDFs. It looks like you had some pagination, being able to go next and previous. And so having the ability to critically assess or play with some work that relates to that would be cool. And to Mark’s point, another thing to think about at a technical level is that, obviously, this doesn’t deal with document fragments; it doesn’t deal with the relationship of the contents of the text and things like that. It’s just a representation — not to belittle the work at all, but it is just documents as images of pages. And so, based on that, there’s a lot of stuff it can’t do, and what we want to explore in there is a useful thing to work out. That can primarily happen conceptually, but having some understanding of the technical basis on which it needs to occur — which we’ve already built up fairly substantially, with the ability to understand trigger text and to reconcile the text fragments on a page — those are useful things to spend some time on as well. But if there are really low-hanging fruit that come from being able to richly manipulate text fragments, being aware of what’s semantically present within a document, then those are pretty awesome to jump on, and to help guide people’s imagination too.

Frode Hegland: It’s a good question when we’re going to be able to start dealing with text fragments. I would guess speaking for you, Andrew and Adam. So please interrupt me. Once we have a good interaction, we’re just moving the rectangle PDF around. We can start with how to how to do that. But also the library itself will have a lot of text fragment opportunities as well, because it’s not just the title of the books. So I’d love to hear from Adam and Andrew when you feel that would fit, because that will be the really cool thing.

Speaker4: Yeah, it’s definitely a a necessity. We know we need text interaction. We haven’t fully agreed on how we’re going to do that yet. Unlike a base level, just grabbing text from the PDF not worrying about how it’s linked to the text around it.

Speaker10: That’s.

Speaker4: That’s something we could implement sooner. And then we can figure out how we want to work with it beyond that. But I think we also need to come up with sort of how. All of this text interaction works. I kind of see it as one of the tools potentially inside that menu we talked about Fred. Maybe like a text selection tool. Maybe that would be a laser pointer or something. Because fingers are rather large when you’re trying to pull text off a page. And also right now, any sort of pinch gesture is currently used by the system for moving the PDF. So this is definitely a, a discussion we’ll have to have if we want to add another gesture, if we want to switch modes. And now the current gestures do something different. If we want a tool that does like a selection and then a certain press will pull it off the page. There’s really a lot to to discuss before we even start implementing any of it.

Dene Grigar: I mean, I was going to say, this goes to what Alan is going to be working on, and to what academics need in order to read and write text, right? We’re not there yet. Basically, what we wanted to do first is just get the stuff into the headset and be able to do things with that stuff, like manipulate it. So that was step one, and we’re showing that this is possible. Now we need to see what we can do with the Vision Pro and how it’s different from the other headsets that are out circulating. And then we’re able to say: okay, how do academics want to use this? Or how should they, or how will they want to use it? So that’s the next phase, right? Absolutely. Mark, I didn’t mean to interrupt you, dear.

Mark Anderson: No, not at all. I’m all for giving ourselves the space to let those doing the design push back the other way and say: okay, well, if you want to be able to deconstruct the document and interact with things, it would be helpful if it was essentially marked up, constructed in a certain way. Rather than anything as pedestrian as indicating bits of a PDF — because that’s a legacy format; it will be with us a long time, but it’s going nowhere fast — if we think in terms of what we actually use to create our current documents, that’s extremely powerful, because you go back to the initial documents and you’re authoring the necessary metadata right into the document. So that’s why I think it’s interesting to hear from those doing the work, like we’ve seen in the experiments today with the gestures: okay, what sort of things could you do? And therefore, if you have that range of gestures, what sort of granularity could you play with in the document? What sort of handles would you want, as it were, hidden in that document for you to be able to do the deconstructive work? Because whilst on the face of it it sounds simple — just marking up this being a picture or this being a graph — actually it’s a lot of work that’s going to be done, or rather not done, by somebody else. I’ve seen that sort of dead-ending happen quite often: it’s well intentioned, but it goes nowhere fast because of that hump of unseen work. And whilst it won’t affect things we have now, I do think, for the reasonably near distance, we should ask what our documents would need, in terms of their structure, for the design that’s being done now with the XR to be able to effectively put the handles in them. That’s really where my mind’s going.

Frode Hegland: It’s a shame again, that Alan is not here for this. And we need to organize the time for for him to be here. Maybe we’d send him a link. But can you not hear me? That someone.

Brandel Zachernuk: We could. I couldn’t hear Danny if she was trying to talk to us. She she did talk.

Speaker6: Oh. Oh, were you talking?

Frode Hegland: Danny, please.

Speaker6: No.

Dene Grigar: It’s okay. I just wanted to say that Thank you. Mark. And I saw that Peter had put something in the chat that was. You should probably elaborate on a little bit. I’m going to keep my mute off, because it’s much easier for me to remember to turn it back on.

Frode Hegland: Yeah, we should look at what Peter says in the chat. But, Mark, on that point, this is exactly what I expect the user stories — user story mapping, rather — to be for. So I hope we can come to an accommodation where Alan does the right kind of timing for the group, so he does not leave during what is currently allotted. Yeah. Please go on, Peter: what do you mean by that thing in the chat?

Peter Wasilko: Well, I’d like to be able to have the line spacing in the PDF dynamically expand, and then be able to bring in an interlinear gloss, providing extra information, or perhaps have a 3D projected call out. So maybe you just have a highlighted section, then have a line angling off in 3D space to an adjacent floating panel that could have the additional supplemental information, and whatever that content would be would be completely context dependent. But just overall, the notion of having an affordance that allows us to have a region of marked text and then pull out a bubble containing. Supplemental material should be a core affordance of the system.

Dene Grigar: Can I mention, I love that. And what I’m going to do now is start a new doc in Basecamp called “Wishlist”. Peter, you’re going to be first on the wish list. And I’ll make the link available in the Slack channel too, so you can just click on it, and if you want to add anything to the wish list, let’s do it. All right.

Frode Hegland: That leads us to the naming convention straight away, actually, because when we see documents in Basecamp in those little folder windows, the names get really badly broken over lines. So if we add something, try to make it one or two words, just to keep it neat. It’s just a suggestion, not a rule. Mark.

Mark Anderson: Yeah. To save me writing a sort of essay into the chat, I’ll just quickly add a little extra background to the observation I made earlier. I’ve spent a lot of the last couple of days basically putting information from papers — well, the text of papers — into Gabo’s system, which in its original inception is a chat system, so it works at a small block level. Trying to put long-form documents into it turns out to be extremely hard, and things like where the footnotes go really got me. This is why it’s got me thinking about the construction of documents, rather than trying to do things to systems that can’t really evolve any more. Our print systems are what they are, and they don’t lend themselves to much improvement. But we now have the ability, because these are digitally native documents, to basically strand things, and it’s a chicken-and-egg question: well, if you had a document that had this sort of structure in it, what are the fun things you could do with it in XR, beyond just looking at the letters on a page?

Mark Anderson: Which is fine, and it’s good if you’re sitting on a train or something, but it doesn’t actually move the world forward much. But the ability, certainly for an academic document, to deconstruct it, to pull out the threads you need, to pull out the underpinnings and cross-link and, if necessary, transclude information through, is amazingly powerful, and in the malleable space of XR it offers a lot. It’s difficult because our primary consumption document is basically not fit for purpose, but I think that doesn’t matter; we can do stuff with it, and we are doing stuff. But in parallel, I think we could show some really interesting things to the funders, just giving a glimpse of how we’re limited by what we’ve got versus what we could do. I make this point primarily because I think it also affects things like — well, we’re probably not going to use Word much longer, or it’s not going to look like it does at the moment.

Frode Hegland: Right. So obviously I agree with the issue, and obviously that’s why visual meta is a stopgap. But I know you agree on this, Mark, so I’m not arguing that. What I would like to raise is the difference with the author actively doing this, because I think it’s going to be a bit of a churn to get the author to make these richer, more multimedia documents. We should allow them and we should support them, no question. But I think it’s a different thing to take the metadata that is in the manuscript and retain it. That’s obviously something we’ve discussed, and I think the distinction is really, really important. Take something as simple as headings: the author shouldn’t have to manually make headings if they already have headings in Word. Why in the world throw them away? So an incremental discussion around this is really, really important. And I really would like the author to be able to draw rectangles on the PDF and say: here should be richer stuff. Brandel is a bit wary of that, and I don’t think that’s going to be the winning solution — I agree — but it might be something that some advanced authors want to avail themselves of. So we should try many ways for the author to control how they express themselves, so that it becomes possible — oh, he’s got to go — so it becomes possible for the reader to do what you’re saying. I believe that is absolutely core; I believe that is what this project is. So once we have the infrastructure for the stuff we’re talking about, maybe, Mark, you add a Basecamp document introducing this, so that we have somewhere to put our thoughts.

Dene Grigar: Hey, folks, can I ask you a favor? Can you refresh the agenda for today and look at next steps? Because I’m adding next steps here. We’ve got 15 minutes left, and I want to make sure I get all the next steps on here. If there’s something I’ve missed, please let me know. And if you think of something after the meeting, please drop it into the Slack channel. Yes, yes.

Speaker6: This.

Frode Hegland: This is all very nice and perfect, thank you, Dene. I think maybe what we should do in addition is write names next to the tasks. So if you... yeah, fantastic, if you do that.

Speaker2: And I would be interested, until we have the complete design and user stories, to hear what you academics in the group imagine this could do for you — just, in your own words, what you would like to have from it. Because I’m just a general reader. I read books, I read PDFs, I look at some papers, but I usually skim them. So I’m not an academic in that sense, and for me it’s not part of a bigger puzzle. So it would be very interesting to hear your view: one or two things that you really want to do in this system yourself, for your own academic reading. That would inspire me to design very specialized tools for it — not general tools, but really specialized tools for academics.

Dene Grigar: So that will be part.

Speaker6: Of the add to what.

Frode Hegland: I.

Speaker2: Wish list, yes. But I want a wish list from academics, not just from everyone. Yeah. Yeah.

Speaker6: You I just.

Frode Hegland: I just want to amplify what Adam was saying. This is something that I’ve wanted for a while as well. So how about we have a Basecamp document called something as simple as “What I want as an academic” — a long name, going against what I just said. And you can either write in there or provide a link to somewhere else where you’ve written it. For instance, Mark’s click-on-a-citation type of thing — all of those things should go in there. We should definitely have an “I’m an academic, I want this” document. How should we do that? What is the format they need for noting those things?

Dene Grigar: Well, I was thinking of that as the wish list from academics. So that would be something Mark and I can get together on. I think Mark and I would have fun over this conversation; we love to email each other long emails. So Mark and I can get together and talk about this and develop something. But that’s our wish list. I think everybody has their own wish list, so ours will be the academic one. And then I also wear another hat: what do I also want to see? But know that this is something Mark and I can put together.

Speaker6: Absolutely.

Dene Grigar: This was on my mind. So yeah.

Mark Anderson: I mean, it is amazing, just today, having spent almost all day just porting information: the biggest thing I notice, and it’s accidental, is just having it in better text, better laid out. Ripping stuff out of that format and putting it into something I can set at a comfortable size, at a comfortable page width. To the extent that I find myself, in moving papers I’ve read somewhere in the past, actually getting new insights, because I’m reading them at a comfortable size and comfortable spacing, instead of being crammed into whatever crazy print system it was before. And I understand the background to it. But I just say this as a genuine note of surprise at how I found myself essentially finding new richness in the stuff I’d read before. And why has that happened? Because I’m actually reading it in a readable format, a comfortable format for me to read. I wouldn’t have seen that coming.

Speaker6: I’m making. I would like.

Frode Hegland: To show you guys something — I’m just going to show you something really quickly. Hang on, I’m trying to do share screen here. I think you know this, Mark, but just for the sake of argument. So here I am: I open up the HyperCard guide, I go into a page. Now, this is quite well written, but... so I have this. This is using Liquid, and it’s not always appropriate, and I’m not saying this is the most readable way, but being able to customize your own reading space when you need it is very important. And, you know, what I showed here isn’t going to suit everybody. But if you can design your own reading space, you say: I want this spaced out, or whatever. Let’s call it a reading palette, or reading eyeglasses, or something like that. I’m surprised to hear you really highlighting this, Mark, but that’s really, really nice. Please comment.

Mark Anderson: Well, I can show you very quickly — if I haven’t shared the wrong thing — I’ve shared the right screen, which is that one. Are you seeing this screen?

Dene Grigar: Yes. Well. No. Yes.

Speaker6: No. Yeah.

Mark Anderson: So literally, this is Minter, this is Gabo’s system, and I’m basically moving one of David Kolb’s papers into it. And it’s interesting: for instance, a block system has no notion of footnotes. Where do they go? We’re playing with that at the moment. But I’m thinking of a different deconstruction from the one that Frode showed, which is really interesting but, I think, answers a different reading case. So one thing we have here is block-level addressability, which is also important for other things you might like to do. But basically, even without lots of extra affordances, because it’s now essentially HTML under the hood, I can do an awful lot very easily just in terms of tweaking things, and I don’t have to go through such a large transformation as was just shown. I don’t think the two are in opposition; I think they solve different problems. But anyway, I just thought I’d show you the screen because it’s been quite a revelation. Anyway, I’ll stop sharing.

Speaker6: I mean.

Frode Hegland: There are infinite variations here, and this is definitely something that academics will need, in different ways, to tear the text back into semantic value, away from the prison of the two columns and so on. So, Adam, a lot of people in the academic community want to be able to do this kind of stuff, and I really look forward to experimenting along these lines. By the way, just hang on, super quickly: I had forgotten I actually already have this built into Reader. Just select whatever arbitrary block of text you want, or the whole thing, hit spacebar, and it comes up. Sometimes ligatures are annoying; we have to fix that. But it is there. And to have something as easy as that in XR, in the preferred style of the reader — how amazing would that be? Now, an important corollary is that this obviously helps accessibility. So as we experiment with this, it will help users with different visual needs, and that may help us get grants to support it. And it’s something that everybody needs; some may just need a very special color set. Right. Dene.
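
(A minimal sketch of the “reading palette” idea: user-chosen spacing, measure, and size applied to extracted document text. Here it is shown as plain DOM styling; an XR build would map the same preferences onto its own text layout. The interface and default values are illustrative assumptions.)

```typescript
// Sketch: apply a user's reading preferences to a block of extracted text.
interface ReadingPalette {
  fontSizePx: number;
  lineHeight: number;     // multiplier
  measureCh: number;      // line length in characters
  letterSpacingEm: number;
}

const comfortable: ReadingPalette = {
  fontSizePx: 21,
  lineHeight: 1.6,
  measureCh: 65,
  letterSpacingEm: 0.01,
};

function applyPalette(el: HTMLElement, p: ReadingPalette): void {
  el.style.fontSize = `${p.fontSizePx}px`;
  el.style.lineHeight = String(p.lineHeight);
  el.style.maxWidth = `${p.measureCh}ch`;
  el.style.letterSpacing = `${p.letterSpacingEm}em`;
}
```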

Speaker6: I was making notes.

Dene Grigar: I was making notes. No, no, no. Go ahead, say it again.

Speaker6: Oh, no.

Frode Hegland: Oh, sorry. No, I know you’re writing about this. All I was saying is that the kind of thing Mark and I are talking about — not the same thing, but similar — these are the types of things that will also help users with special accessibility needs. The difference between somebody with quote-unquote normal vision and somebody with different vision is a continuous line; it’s not either/or. All academics can benefit from having different views of their text; that’s something I think we agree with. But because it can also help with accessibility, if we implement some of it then, lo and behold, we may get grants for that as well. Does that make sense?

Dene Grigar: Yeah, there’s a lot of money, a lot of NSF money, for example, in developing systems for people with disabilities and sensory sensitivities. So there’s I mean, I. Yeah, absolutely. I think the design principles too. There’s a lot of things we’re we’re innovating right now we’re talking about that is fundable. So thanks.

Frode Hegland: We are partly in the world of funding now, so we’ve got to keep an eye on that.

Speaker6: Peter. Peter.

Peter Wasilko: Yeah, I think we should also consider programmers and software developers as a special subset use case.

Speaker6: Yeah.

Mark Anderson: So it’s going to make a comment that I mean one thing I’ll try and do some do some input on is the fact that actually different disciplines write very different documents. So even, you know, this idea of academic reading is actually not a one size fits all. You know, it’s a big difference between humanities and the sciences and even in the sciences from the research project that I’m involved in, that’s a lot of information. Well, exactly. So they’re actually they’re almost completely different things apart from the fact they use text. That’s almost where the beginning and the, the, you know, they may look even look similar, but they use a massively different way. And their, their, their their, their structure as captured in their typographic structure and the semantics are derived from the typographic structure are wildly different. I would say that I think, I think people are massively underplaying the problem of, of, of corrupted texts. Another thing I found when trying to transpose text is that unintentionally, the many helpers, layers of helpers we have built into modern systems. We’ll try and help by, for instance, fixing things, by finding dictionary words which are not matches for the things that they partially brought. And this this is interesting. I mean, it’s taking me going as fast as I can. It takes about two hours to transpose a paper cleanly per paper and remove all the artifacts, basically all the lossy, all the lossiness that’s come out of the source document.

Mark Anderson: Which is surprising and disappointing. But this is why I just don’t think we’re going to find magic systems to fix this; our legacy stuff is pretty broken, and it is what it is. That’s why there’s a more pressing need to not keep producing bad documents, and to have something where we actually have access to the structures we need. That’s the need I see. And this links across to a really interesting project I was at at the end of last year, trying to look at reuse across the hard sciences, as it happens. Even there, there’s no commonality; everyone is thinking within their vocabulary pool and their community of practice, and yet what they want is wide-surface search. The answer probably is open data, so maybe its day has come. There’s a really interesting element there in terms of how you start to build that in, and I don’t envisage authors dealing with it. How I envisage this working is that we move from talking about human-in-the-loop to talking about computer-in-the-loop, where the computer keeps track of all that complex stuff for you and presents it to you with things to review or choices to make; it does the stuff it understands really well and leaves us... sorry.

Frode Hegland: Dene’s got to go, so that’s where I’m interrupting you, Mark. I agree with what you’re saying very strongly. I just want to say that for this particular Sloan project we can’t accommodate everybody. We’re accommodating academics, specifically ACM-reading academics. So we’re not going to go into other things apart from explicit side issues. If someone wants to do something either for lawyers or for different kinds of academics, we can do that, but the core deliverable is this particular user set. That’s all. See you later tonight, Dene.

Speaker6: Yeah.

Mark Anderson: I was just saying for Adam. I mean. Sorry, Mark.

Frode Hegland: It’s just you had to continue. Yeah.

Mark Anderson: No, no, no, I wasn’t suggesting we widen the scope massively — heaven forbid. I was actually just thinking, to circle back to the question that I think is implicit in what Adam was asking earlier, that I’ll see if I can dig out some documents that give you a view of the wider piece. It’s just interesting to see how they differ. To a certain extent it’s not that we’re trying to do something for a particular field — although we’ve picked the ACM stuff, it doesn’t really matter — but I think it’s instructive just to see how wildly different something that’s ostensibly similar can be, because that helps when you start to think about the deconstruction. Otherwise you overfit for a particular problem, and then you just design in new problems.

Frode Hegland: That makes sense. And unless there’s something really, really brief, we are done for today. Please keep going in Slack, and please use the Basecamp thing, which Dene has kind of pressed upon us, in a very, very good way. It’s an amazing place to put thoughts and ideas and whatever. Okay, so at least now we have our blog, so to speak. And thank you guys for today. Any questions or comments before you leave? Rob, I think you wanted to talk about privacy and Vision things. I’m happy to stay for another few minutes if you want to do that, but I know it’s very loud where I am. Of course. Also.

Speaker6: Okay. I can’t hear you.

Peter Wasilko: I mean, I can’t.

Rob Swigart: I can’t hear the noise there, so it’s okay.

Frode Hegland: Yeah. Just just briefly. Yeah. Are you leaving?

Speaker6: It’s a.

Frode Hegland: Mark, remember you are... oh, he’s gone. That’s fine. I was just going to emphasize his academic credentials in terms of what he’s writing up. So, Rob, the privacy issue with the headsets: I don’t really worry about it myself, because even though Apple is not exactly an angel of a company, privacy is one of the things they’re strong on. So just the fact that it scans the room doesn’t mean that data will be used somehow. And I do not think someone’s going to be sitting with a Vision Pro in a dentist’s office waiting area any time soon. It’s an interesting issue, and I really look forward to you getting one and comparing your experience with ours, to see how it goes.

Rob Swigart: Me too. I’m not. I’m not too worried about it. It was just a. The sideline. Yeah, it’s an issue that came up. The press is starting to cover the Vision Pro now, and we’re getting. Views and. Objections. So.

Frode Hegland: And the press wants stories. They want touches, of course. Leon, how are you? We haven’t heard from you for a while.

Speaker6: Hang on. You’re good.

Leon van Kammen: I’m good. I was I was trying to find my mouse. Yeah. I’m good. I I’m pretty busy, but good busy. So I have bookmarked also a video of the Vision Pro because yeah, there are some sort of in depth reviews, so. Yeah, I’m gonna I’m gonna look into that as well. I saw a little bit and so far what I noticed is that the, the, the mouse hover or the mouse over is back. And this is something which is interesting because you know, during the you know, the internet website heydays or the high, high tide, we could have these nice mouse overs to see where we, we would go to. And we sort of lost that when we got the touch screens. And it’s very interesting that now suddenly our eyes are the mouse over. So that’s I would say, a great thing they brought to the table.

Frode Hegland: Yes. And another thing I noticed recently in one of these preview videos is that if you have a Mac screen in the middle, or wherever, and then you have iOS apps on the side and maybe a native Vision app on the side, then if you use a keyboard and trackpad — a trackpad or a mouse — you will have a little pointer. So just like Continuity on the Mac, where you can drag your cursor from your Mac screen over to your iPad and back, you can do that here, so you can be sitting in a completely trackpad-driven environment and control all your screens just by moving back and forth. You have that beautiful little dock you were talking about. That’s exciting.

Speaker6: That’s clever.

Frode Hegland: Oh, yeah. Just more stuff to buy for the headset. Gotta have a keyboard, gotta have this. I already ordered it without prescription lenses because I couldn’t get the prescription in time, so I got a $100 prescription, went to order the insert — oh, no, no, you need a different light seal. That’s another $200 before you can order the prescription lenses. Yeah, that’s a bit much, but it’s still revolutionary. In ten years we’ll look back on this and say: wow, those were clunky early days.

Leon van Kammen: Yeah, it sounds a bit like buying your first SGI graphics computer. They used to be enormously expensive as well, as I was told.

Speaker6: Yeah, I remember those days.

Frode Hegland: So. Okay, we’re five past. Unless there’s anything else, we will adjourn. And I look forward to seeing you Monday. I’ll be back in the UK, and it shouldn’t be super loud in the background. Bye, guys. Have a good weekend. Weekend? Bye.
