Frode Hegland: Hello? Adam's phone. Hello. So here we are. Hello again. Sorry about that.
Dene Grigar: It’s our monthly agenda. I’m eating breakfast, everybody. So I apologize.
Frode Hegland: No, no, that's nothing to apologize for. Okay. So, Andrew's coming up, of course. Right, I'll put a link to the agenda in so we can start doing things. So, Adam has got more time now going forward, which is why he's here today, which is excellent. One of the things he needs to better understand is what he actually wants to do. So I think the things on the agenda today will really help us all better understand where we're going. I'm extremely happy with what Andrew has done for today, so that's a good start. I believe we can now critique his work in more detail and give more direction, because it does more. It's a really, really nice day for that. Other than that... okay, well, why don't we just start with the more European-friendly daytime meeting, Adam, since no one in America needs to be in on that. As in, we don't need to wait for Andrew. Do you have any thoughts and suggestions?
Adam Wern: Yeah, well, for one we could decide on that offline. But who is going to be there, and what is the kind of work we're doing there? For example, as I said before, if Mark and I are doing the reference thing, with the Sloan hat on but as a separate component, it could be you, me and Mark, or me and Mark, doing that. Or, for example, if it's about refining the overall design... So it depends on who and on what project. There are many parts, and it's good to know which parts.
Frode Hegland: Speaking of Mark, here he is. Hi, Andrew. Hi, Mark. We were just talking about one of the items on the agenda, which I just linked to, but we haven't skipped ahead. I mean, we haven't skipped anything, but maybe we do think about that offline for the Europeans, so we don't waste time trying to organize it now. Right. So on the agenda there's vision and JSON and stuff. First of all, the Reader library now exports JSON. That should be to Andrew's specification. Andrew, you haven't had a chance to test it yet, right?
Andrew Thompson: Not yet. I'll take a look at it today. I did a rough sort of skim through, just looking at it, and I noticed there are duplicates, which seems a little strange, but I'll see if it still works.
Frode Hegland: Okay. Yeah. Just write down any issues and, you know, I'll send it back. One thing Fabien asked yesterday is, does this have spatial information? And no, it doesn't. Reader, the traditional version, won't create that, but we desperately need to get it in at some point. That's going to happen. So that was just a quickie. And then I'm going to show you three slides, because I think I've solved a problem that was a problem for me personally, and which I think has strong relevance to this part of our meeting. If you can't see, scream. So I really had a problem with how to consolidate different views of this stuff. This goes into what we talked about on Monday, the whole question of how you view the journal and so on. I'm using Author as my example here, but this can be generalized. So, what I'm planning to do: can you see the little bar at the bottom there? You can, okay. Currently we have Write/Map; that's what is in Author now, and it's a nice toggle. But then we have many ways of describing an outline; it's a folding thing and all kinds of things. So what this does is unify the Mac version, iOS version and Vision version, because it's actually very complicated. So now you can see we're in Write mode, because that is bold, and that's what it looks like. If I now click on Focus on the right, it does this. That means that the thing on the right determines a specific setting for whatever main view we're in. So that's clear. If I now click on Outline, the options on the lower right change. So this is how you would fold into an outline.
Frode Hegland: But I've been inspired lately to really make things clear for the user. So instead of a keyboard shortcut, you can click Outline and you have an outline. Now let's say that you want one of the special views, such as outline plus names: click on Names and they appear. All of this is very, very obvious, right? Good. The final one is the Map view that we have. So now at the bottom of the screen we have two controls for the Map view: tags on the left and layout on the right. These are all there now, but keyboard-shortcut hidden. So the notion is that you can quickly turn on and off the different categories of things on screen, and on the layout side you can very easily align left, center, or right, lay things out horizontally, vertically and so on. The reason I'm telling you this is the notion of the outline for the journal that we discussed a little bit on Monday. The question is, how do you go from the reading mode, or here the writing mode, into that kind of view? And I think that maybe even in our WebXR system we use these three. So when we're in reading mode in what Andrew is building, in our little control bar that comes up, there are three modes: one is read, one is outline, and one is map. Because having an outline expand and collapse, and a map, are very different things. So to me, a big deal. I don't know about you, but does that make any sense or clarify anything? Stunned silence.
Dene Grigar: Well, I can't speak for anybody else, but for myself: you are the developer of this software program. You know it better than anybody else. And even with the bit of time I've spent using it, I'm still not used to shortcuts and things like that. So watching you, it looks so simple, but when I try to do all these things on my own computer, I struggle because I don't know all of the shortcuts. So it looks great; I just can't repeat that activity.
Frode Hegland: Oh, but notice I didn’t use any shortcuts.
Dene Grigar: You have a whole list of shortcuts, but it's different from anything I've ever used before. I mean, I started with WordStar, WordPerfect, you know, ClarisWorks, AppleWorks, Microsoft...
Frode Hegland: The thing is, it takes into account exactly what you're saying, because at the bottom of the screen there are these three tabs. So the point really is that, whether in an XR environment or here, if you're in the Write environment you quite literally click on Outline and now you're in the outline. You can still use keyboard shortcuts, but for our WebXR we don't expect people to use a keyboard a lot, so it has to be available there in front of you somehow. It's funny, and I'm grateful you're the one to comment, because I'm actually designing this with you in mind. You are my number one user, because you are clever but you are not used to this software, so you're a great target audience for me to think about. So you just click the options on the screen, and that's all it is. Now, in this view there are lots of keyboard shortcuts, but you don't have to use them; they're just listed here so you can learn them. The primary use is just to click and choose one of these options.
Dene Grigar: From a usability standpoint, I can't see the menu at the bottom very easily because it's so light. Anybody with vision issues is going to have a hard time looking at that very light text.
Frode Hegland: Do you find that when you use Author on your computer natively? Yeah. Well, okay.
Dene Grigar: Yeah. I mean, I read a lot of blogs too, and there are several blog styles that use that light gray text, which is really pretty but makes it really hard for folks to read, especially for many people. And we do a lot of usability testing for all the things we're doing for The NEXT, and that would not pass our usability test.
Andrew Thompson: Right. One thing I could pop in and say about that: I don't know how difficult this is, but generally, changing a color palette is not that hard in software. You could have color palette options, leaving this as the default. And, I mean, everybody likes a dark mode. I know Reader in dark mode would be weird, but everybody likes dark mode; maybe users would just be excited to have some new color palettes. I have no idea, but it could be an interesting thing to implement. Pretty much everyone is jumping on that bandwagon right now. You could, of course, have a high contrast mode, which is what I was getting at. Oh, you've got dark mode.
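For reference, a minimal sketch of the kind of palette switching Andrew describes, using CSS custom properties; the palette names and color values are illustrative assumptions, not Author's or Reader's actual themes:

```js
// Define palettes as sets of CSS custom properties and swap them at runtime.
const palettes = {
  light:        { '--bg': '#ffffff', '--fg': '#1a1a1a' },
  dark:         { '--bg': '#1a1a1a', '--fg': '#e6e6e6' },
  highContrast: { '--bg': '#000000', '--fg': '#ffffff' }, // for low-vision users
};

function applyPalette(name) {
  const palette = palettes[name];
  for (const [prop, value] of Object.entries(palette)) {
    // Set each property on the root element so the whole UI inherits it.
    document.documentElement.style.setProperty(prop, value);
  }
}

applyPalette('highContrast');
```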
Speaker5: Also the text.
Frode Hegland: So here, by contrast, is dark mode.
Andrew Thompson: Nice. Is it just those two palettes that you've got? But that's fabulous. It's already there.
Frode Hegland: It's not, actually; there's a gray one and a warm one.
Andrew Thompson: Maybe one of those would be the contrast one.
Dene Grigar: You know how much I hate dark mode.
Andrew Thompson: I know you do. Do we have a, like, background person that doesn't like dark mode?
Mark Anderson: No, I gave it up almost instantly after trying it. It's okay, but the only thing it's good for, funnily enough, is coding, and I think that's why we have so much of it: people who write code all day make apps, and so assume that people who don't make apps want to read what they see in their IDE. I assume that's basically what's happened. You get more eyestrain with dark mode, which is ironic because half the idea is that it's sort of less in-your-face. And the interesting thing: I was writing a document recently in Author, and there are two things I realized. There is a difference between the UI you want when you're writing as opposed to when you're reading, and this is one of the failures, in a sense, of WYSIWYG word processors. One of the first things I did was change the heading font to something that was clearly not body text, because, for instance, in the slide we saw earlier of the names, I found it very difficult to tell without staring at the screen what was a heading and what was a name, because it's all in the same color.
Mark Anderson: It's all in a broadly similar font. There isn't enough visual distinction, but there are actually two distinct elements on that screen: you've got the headings, which are showing you where the thing you're after, in this case the names, occurs, and then you've got the names themselves, and those are insufficiently distinct. The thing is, it looks visually appealing, but to work with it's actually really hard work. And so the main thing it made me realize is that what Author lacks is a reader mode. It needs a writer mode and a reading mode, and you want your fonts and, actually I think, your visual structure quite different. Because when you're writing anything more than just, you know, a prose novel or something, there's quite a lot of structure going on, and the author has to juggle that as they go. And if it's all just a mélange of text in slightly different sizes, it's remarkably hard to tell where you are. Yeah, it looks nice; it just takes twice as long to get work done. That was my experience.
Frode Hegland: Absolutely. Just to address the color issue: we do have different themes in Author. When we're testing Author in Vision, its dark mode, and a slightly lighter dark mode, is to me at least much better than on a screen like this. So that definitely needs to be part of our testing of different themes for different things. But the key thing I wanted to show you and ask about, particularly you, Dene, was the notion of allowing the user to change the kind of view that they have. That's obviously going to be very important in what we do with our WebXR system too. So I thought this was a useful thing. Right, moving on. Then we have the Future of Text social, 31st of May to 3rd of June. Some of you are coming. Dene, if you suddenly have a reason to be in the UK, please be here. Any other announcements?
Dene Grigar: Yeah, I made an announcement card for the event, and I didn't hear back from you about it. So shall I drop it in the chat and let people see it?
Frode Hegland: Oh, I immediately replied.
Dene Grigar: I didn’t see anything from you.
Frode Hegland: Oh, in Slack. Yeah. Oh, no, I said I approved it. Yeah. And of course, put it in the chat. Of course.
Speaker5: Let’s see.
Dene Grigar: I’ll drop it in the chat. Yeah. I’m not seeing anything.
Frode Hegland: I'm finding Slack quite confusing. We really should be using the reply-in-thread feature more.
Mark Anderson: Is there a way to enable reply-in-thread mode as a default, I mean? I always end up replying in the wrong place, because you normally reply under the thing you're replying to, but in Slack you reply in a different place, which seems highly counterintuitive. Well, for someone who doesn't have hundreds of hours of use under their belt.
Dene Grigar: I’ll look and see. I don’t know if you could set it up that way, but I can look and see.
Frode Hegland: Yeah. So she sent it to me as a direct message and I said, perfect. No further comments.
Dene Grigar: But my one question to you, I think, in response to that is: do we want to call it Future of Text? In the body, if you look at the body, it says: you're invited to join Vint Cerf, Frode and Dene for a daylong symposium to discuss the future of text and textual experiences in virtual, augmented and mixed environments. And I also need to know what Vint Cerf's affiliation is. I need that for the other information.
Frode Hegland: Yeah, I replied to that in the other message. Yeah. Google.
Dene Grigar: And his title? Is his title VP?
Speaker5: Yeah. Okay.
Dene Grigar: Any particular like VP of what? Innovation or.
Frode Hegland: Is that something you want to put in the card?
Dene Grigar: Not on the card; it goes in the travel information.
Frode Hegland: What Vint's job is. Okay.
Dene Grigar: Well, let me show you. I'm gonna drop this in here for everybody. Once again, here's this document. This is the travel information. Let me do a screen share to make it easy. So Holly and I, my lab, are putting together the travel information, because people are coming in from out of the country and various places and need to know how to get to Vancouver. One of the things I have found putting on other events here in town is that sometimes people book a flight to Vancouver, Canada. Yeah, so I want to make sure everybody knows where they're going. So: join us for a daylong symposium, blah blah blah, day and time, Saturday, 9 to 5, the Murdock. Host, co-host, Vint Cerf.
Speaker5: Google.
Frode Hegland: I don't think we need more than Google. Neither you nor I have more than company affiliations either.
Speaker5: That’s fine.
Dene Grigar: And then, for more information, go here. The location: the event takes place on the seventh floor of the Murdock, located at West Columbia Way on the Vancouver, Washington waterfront, just across the Columbia River from Portland, Oregon. A vibrant area within walking distance of restaurants, bars, and coffee shops, the waterfront also offers majestic views of one of the most beautiful rivers and areas in the United States. Travel: participants joining us from outside the region can fly into Portland International Airport, a 20-minute ride to downtown Vancouver. That way they know they're not going to Canada.
Dene Grigar: Yeah, there are many hotels, and I list hotels here.
Speaker5: Okay.
Dene Grigar: Anything else I need to add to this? So, Mark, let's imagine you're coming in from London. You know what airport you're going to; you know where to book a room.
Mark Anderson: Well, unless it's got complicated to do since I last looked, a really useful thing is to just make a Google map that basically pins the hotels. I know it sounds silly, but as someone who's always made maps, ever since I first went to sea, I guess, the first thing I do is make a map: there's the airport, there's the venue, there are the likely hotels. You know, is my hotel on the wrong side of the worst place in town? Just silly things, if there's just a rough map. Because I'm amazed at the number of people who just turn up at the airport and think, now how do I get to the place I'm going to? And there's normally a bit of work, so this is really useful.
Dene Grigar: Thanks. I'll do that.
Speaker5: Thank you.
Frode Hegland: I think that extra work would be useful if we were hosting it in Rome, like last year for Hypertext. With all loving respect, and I love Vancouver, Washington, it's about the size of Southampton. The campus is not very big. You're not going to end up in a hotel way out of town; the obvious hotels are within walking distance.
Mark Anderson: I understand, but what that overlooks is that you don't know that till you get to Vancouver. And even just saying it... the point is, a map shows you in a way that anyone can instantly get.
Speaker5: Mark, I’m going to do that.
Dene Grigar: But I do want to clarify. Vancouver, Washington is half a million people. It’s very big, but downtown is small.
Frode Hegland: Also, in this document, Dene has written that there are many hotels located within walking distance. These are within walking distance.
Speaker5: I'll make a map.
Dene Grigar: That's good. That sounds great. Okay, so this is ready to go. The invitation is fine, and that means, Frode, you can send it out.
Frode Hegland: Okay. No, but, I mean, you’re putting this on a web page, right?
Dene Grigar: Well, this is the invitation we're sending out specifically to people. I'm sending it to Nathan Stallman, you know, from Noctua; I'm sending it to my friends from Unreal Engine, my friends at Autodesk.
Frode Hegland: I'm assuming you're going to put those contacts on the spreadsheet, so we have an overview, right? Because these are people we both need to have. Well, all of us, ideally.
Dene Grigar: I listed all of them before, when you gathered information weeks ago, and I dropped their names in there for you.
Frode Hegland: And the spreadsheet?
Dene Grigar: I don’t know about a spreadsheet. I dropped it in the chat when we were having that discussion.
Frode Hegland: Yeah, but for something like this, don't you think it's worth having it in a shared spreadsheet? Because if it's in a chat, you know... Okay. All right. Fine. I thought you wanted to do this, both of us, or you wanted to go through these people too. But if you want to invite people from your side, that's fine too.
Dene Grigar: Well, I guess I'm not following. Did you make a spreadsheet and I just don't know about it?
Frode Hegland: It’s been listed on our agenda every day now for at least a month.
Speaker5: A spreadsheet.
Frode Hegland: Yeah, I'll show you. If you look at the agenda... let's go. Was it here last week?
Dene Grigar: Last week I had graduation with my students. No.
Frode Hegland: That's fine. It's been there for a while; it's the invitation list link. But yeah, that's absolutely fine. Okay. So, on the agenda, the next thing is: is there anything more, Dene, on the call for papers, just to see if we've split ourselves into the groups for that? Hang on, I'm just going to find something.
Dene Grigar: I think where we left it was that there were going to be possibly up to four papers. I'm doing one and you're doing one; you're leading the big one for all of us. I'm going to do my own. And I'm hoping, Mark, I can seduce you into working with me on the one on XR and hypertext.
Mark Anderson: Oh, actually, that is good, because I haven't got much on paper, but an awful lot in here that I've been working on in recent months. But yes, I was doing something that was deliberately going to sit alongside what we're doing here and not try and bounce on it. The thing I've been looking at is a reflection on where the edge of a document lies in the digital world, and the implications for our design tools, because essentially we don't have any good tools at the moment for writing the sort of documents that we want to put into XR. And please don't misread that as a canard about Author; that's not what I'm implying. I just mean that as you begin to think more thoroughly about deliberately designing a document that you'll tear apart in a space like XR, then I think you actually want something that sure as heck ain't going to look like Word. I don't know what it is, but I don't think it's going to be what we have.
Mark Anderson: So that's the angle I'm covering. Also, tangential to this, for those who were in Rome: I'm also doing something with Gabo from Minter. They're just showing some stuff they're doing in Minter, but as it happens also based on using some stuff from the ACM corpus. So that's, in a sense, the three papers I think I might have a hand in. And I'm very happy to fold what I'm doing into what you're planning to do, if that sounds right. Oh, and the other thing, on the side: I may do something with Adam as well. I suspect it will most likely end up as a demo slash poster paper, basically, so there's a suitable pointer for it, and we're going to look at some timeline stuff, if we find the time. Adam's welcome to correct me if I'm wrong there, but I thought that's something we could do, because we can build on some stuff we've done already.
Frode Hegland: So let's try to list these in the agenda now. So, Dene, from your experience, and also you, Mark, of course: the paper that explains what we're doing here, that basically highlights Andrew's coding. Considering all we have left is workshop or Blue Sky, should that be a Blue Sky paper, a workshop paper, or a HUMAN workshop paper?
Mark Anderson: Well, I think it's either Practitioner, because, as you say, it's something we're actually doing, or it could as easily go in Blue Skies, to be perfectly honest. They're both the same length of paper, and at the end of the day, if we put it in the wrong track, the committee will generally punt it into the track they wanted it in. The papers are the same length, and both submit on the same day. So I think the main thing is we get cracking on an up-to-six-page paper, minus references, that covers this activity and what we're doing, and also points to the demo that we obviously intend to have.
Dene Grigar: Were talking about this being the blue sky paper photo. We said that to Klaus, and I think that would be the best way to go. And if it doesn’t work.
Speaker5: They’ll say.
Frode Hegland: I'm just trying to get clarity over these different papers, so we can label them correctly and kind of knock this one down. Right.
Mark Anderson: As I happen to be on the committee, I'll just say I really wouldn't sweat it too much. As it happens, the two things mentioned, the likely things we'll submit to, have both got the same deadline, on the 26th of May, probably Hawaii time. They're both six pages. I'm very happy to set up an Overleaf, even if we want to do some of the earlier drafting in something else, but in terms of actually getting it through the final deadline, the easiest way is to do our submission for the 26th in Overleaf. I've got an account that allows me to share, so it's not a problem for any of those papers. That anchors the final submission, and I say final because I don't want to cramp anyone's style; you know, LaTeX isn't everyone's first love, it isn't mine, but I'm just pragmatic about that end point. Okay.
Dene Grigar: So just to clarify one more time: Frode, you're going to make a paper for the team, and it's going to be a Blue Sky paper, six pages, due May 26th. Dene is going to do a paper that's also Blue Sky, maybe HUMAN, doesn't matter, six pages or less, working on XR and hypertext. Those are the two papers coming straight out of the project. Mark is doing one as well; he could be working with me, or he could be doing his own.
Frode Hegland: Dene, do you have a working title for your paper?
Dene Grigar: Yes, I put it in the notes to you the other day.
Speaker5: Okay.
Frode Hegland: Is it a problem for you to speak it again? Would you mind?
Dene Grigar: Yeah. Let me pull it up. It’s right here.
Frode Hegland: Okay. Mark, do you have a working title? So it’s Mark and Adam collaborating, right? Sorry, just to make sure. Is that right?
Mark Anderson: That's a separate thing. That one was basically me saying yes to Adam a couple of days ago, so I don't think we got as far as a title. But basically I would envisage it being essentially a demo, with, you know, a poster-type thing to go with it that gives the community a suitable reference to make use of it. I'll talk with Adam; if we want to make more of it, we can, but I think it's probably a small paper.
Dene Grigar: As I mentioned, I already dropped articles into the Basecamp, so if anybody wants to join me, I've got four references to start. And I did a lot of digging: I worked most of last Sunday doing a literature review, and I've picked four articles. There's not much on XR in hypertext, which is great; it's virgin territory, an open field, lots of things we can talk about. And I'm interested in how hypertext and XR fit together.
Frode Hegland: That's great. So I have three papers now. What am I missing?
Mark Anderson: Hold on, I'm just putting it in the chat. I'm absolutely sure it won't be the final title, but that's my working title for my Blue Skies paper, to stop me coloring outside the lines, really, because the central thread of it is trying to link it back to hypertext.
Frode Hegland: So this okay, so this one that you just pasted in is by you.
Speaker5: Yeah.
Frode Hegland: But the citation views one is by Mark and Adam.
Frode Hegland: And the Dene one. Dene, you want contributions from anyone who would like to write?
Dene Grigar: Yeah. You might want to.
Speaker5: Okay.
Frode Hegland: Yeah. The Reading in XR Blue Skies paper that I'm the main author of: you're all automatically contributing authors on that one. You may choose to actually write as much or as little as you want. Okay.
Mark Anderson: Also, in case it helps, Dene: I've put the BibTeX up for you. It's in the thread in the Slack where you posted your text.
Speaker5: Thank you.
Mark Anderson: I don’t know whether you’ll be going that route, but if you are. Anyway, I just thought it’d be helpful.
Frode Hegland: So, are we done with the papers? Is everything covered? If you reload the agenda, it's in there as well. I can format it a little nicer, of course, but for now, that looks right. Right?
Speaker5: That’s good.
Dene Grigar: Okay. All right, I’ll clarify.
Speaker5: Perfect.
Frode Hegland: So, a demo by Fabien. Fabien has something to show.
Fabien Benetou: I do. And to limit the risk, I recorded two little videos, and I have a link that might even still work. So please let me know if you can see my screen.
Speaker5: Yes.
Fabien Benetou: Super. So the first one: as most of you know by now, I give workshops to kids. It's a bit removed from writing papers, remotely at least, but I want to highlight from the start that the kind of content and the interactions can potentially be the same. And I will explain a bit more at the end, so hang with me, despite the graphics looking quite childish. So it's the same web page as usual, but it starts with a menu. You have six mini games now, and you can select each of them. You have a little 3D model and then you can control it. So here, of course, you need to go through the maze by controlling it. The text here is code, and then you can see, in French, what the hero, let's say, must do, with an animation at the end to confirm it. The code is not hidden, even though it looks a little bit rough, to clarify that it's not magic: it is code, and that code can be edited. That's another type of game, where you put shapes and forms in the right place. I'll skip the different games. The last one, which you heard, I guess, I'll put the sound back on: Simon, the memory game.
Fabien Benetou: And I'll stop the memory game there. And I think there was also one I must show, especially for what I mentioned last week. This is voxel painting: you move, you pick a cube, and then you can clone it, basically. What I will not show you right now, as I mentioned last time, is that the content here, the position of the cubes, the color, the rotation, is in the hash of the URL. So if I send the URL with that shape to someone else, they can continue the game, they can change the content and then pass me back the URL. Again, I mention this because instead of a set of colorful cubes, you can have a set of papers with the numbers of the lines highlighted, send it to a colleague, and you'll have the same behavior, basically. And from this, again to detach a little bit from the childish aspect, I can show you this. It's the same environment, but totally different content. And I'll scroll. So now I'm in VR and I use my mouse wheel to scroll down. And I hope you can see, here: it's a coffee machine, actually a 3D printer, and you're supposed to position that box, and then as you scroll down you press the button that's normally in front of the 3D printer, and then you change the filament at the back of it.
Fabien Benetou: And then it's finished. You can go back and forth, and in real life it gives this. So it's downstairs in my little basement. I adjust the cube roughly where the 3D printer is; I could put in more specific detail, because of course you can't put the cube inside the printer, or whatever tool you want to learn. I mention this here, but I'm thinking also of fab labs or wet labs, where you're building such tools for research, not just for fun, let's say, or hacking around, and you can snap it in place, etc. And then you'll see: when I'm satisfied with the position, when it looks good, I can do the same thing, namely the steps. I see where the button is; I can do it with the actual button. Next step. And then I can look here. I fake it, but otherwise you would have the filament at the back. So I show you those two things one after the other because, even though they look completely different, it is the same environment. Here it's more game-like; here it's more, not pedagogy, but how to discover a lab or workshop in order to build things. But it's exactly the same code behind it. Voila.
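A minimal sketch of the URL-hash sharing Fabien describes, assuming a simple voxel list; the state shape and encoding here are illustrative, not his actual code:

```js
// Serialize scene state (voxel positions and colors) into the URL hash,
// so sending the link hands the whole scene to a collaborator.
function saveToHash(voxels) {
  // voxels: [{x, y, z, color}, ...] is an assumed shape for illustration.
  location.hash = encodeURIComponent(JSON.stringify(voxels));
}

function loadFromHash() {
  if (!location.hash) return [];
  // Drop the leading '#' before decoding.
  return JSON.parse(decodeURIComponent(location.hash.slice(1)));
}

// The same mechanism would work for papers: replace the voxels with
// [{doc: 'paper.pdf', lines: [12, 40]}] and the URL carries the highlights.
```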
Frode Hegland: Very, very cool. First question: how do you know where you are in the real world? Does WebXR have access to that at all?
Fabien Benetou: So it depends what you call the real world. You don't know where you are. Do you mean for the augmented reality part at the end?
Speaker5: Yes.
Fabien Benetou: So you don’t know. And the headset doesn’t know. That’s why the first step is to put the cube in the right place.
Frode Hegland: Okay. So you navigate based on the cube. Okay, that's clever. Yeah.
Fabien Benetou: The cube is the thing. So here it's a small one; in practice, I really recorded this ten minutes before the meeting, and the cube should have been, for example, three times bigger. But yes, the headset right now has no idea. I was discussing, though, with a friend a couple of days ago. He's building a video game for the Quest 3 with passthrough. I'll do a little bit of promotion, because I think he's doing some really great work: he did Cubism before, and now he's doing Laser Dance. Why do I mention him, and I think we should invite him for a presentation, actually, is because the game tries to make laser puzzles based on the outline of the room, the physical room you're in. So basically he spends his days and nights understanding how the technology works, and he has hundreds of virtual layouts of rooms in order to test that the generated puzzles still make sense. And we were discussing, and I was arguing with him, that tracking basically works perfectly today, up to the centimeter, roughly, and it's not going to get worse: the cameras are still going to improve, computer vision is going to improve. So I'm doing an easy version today, namely that you do the job: you take the virtual cube, you put it in the physical space where it should be. But my bet is that in the next couple of years even that step will be removed. You will say, oh, I don't know,
Fabien Benetou: I'm looking for a mug, and then it's going to put... I mean, it's already possible today, actually; you just need high-end hardware, like super powerful GPUs, more than the Quest 3 or even the Vision Pro. Maybe the Vision Pro, actually, I don't know. But yeah, my point is, sorry, I'm running a little bit long there: right now it's literally manually putting the cube there, leaving the intelligence to the human. But my bet is that in a couple of years at most, that step will be removed in a lot of situations. And one last thing, which I actually still need to implement: if you use the room setup of the Quest, even the Quest 1, I think, it detects which room setup you did. So if I start here in my home office and not in the basement with the 3D printer, it can distinguish both spaces. And thus, if you put the cube in the right place, like this afternoon, and you come back tomorrow, it should in theory be back in the right place. In theory, because if you shuffle the room around, if you add a poster and whatnot, or the lighting is terrible, it's not going to work. So to answer your question in a bit more detail: if you put the cube in the right position yourself today, and the Quest recognizes the room among its list of set-up rooms, it should be back in place well enough, I think, for this kind of tutorial.
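A rough sketch, in Three.js terms, of the manual-anchor idea Fabien describes: tutorial steps are authored relative to a proxy cube the user positions over the physical machine. The scene setup and the makeLabel helper are illustrative assumptions, not his implementation:

```js
import * as THREE from 'three';

const scene = new THREE.Scene(); // renderer and XR session setup omitted

// Proxy cube the user drags onto the physical 3D printer.
const anchor = new THREE.Mesh(
  new THREE.BoxGeometry(0.3, 0.3, 0.3),
  new THREE.MeshBasicMaterial({ color: 0x44aa88, wireframe: true })
);
scene.add(anchor);

// Hypothetical helper: a real app would build a text sprite here.
function makeLabel(text) {
  const label = new THREE.Object3D();
  label.userData.text = text;
  return label;
}

// Steps are children of the anchor, authored in the printer's local
// frame; once the user places the cube, every step lands on the real
// machine without the headset knowing anything about the room.
const stepLabel = makeLabel('Press the front button');
stepLabel.position.set(0, -0.1, 0.2); // relative to the printer, in meters
anchor.add(stepLabel);
```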
Frode Hegland: Yeah, very nice. I can't be the only one with a question.
Fabien Benetou: So I won't ask a question, but I'll give you a tiny bit of the philosophy behind it. The goal, and that's why I started with the kids' version, is that it's literally the same principle, and I do hope that everybody, if they want to and if they have access, can do learning in situ. So, in the lab; but the lab, if you're a kid, is like the Lego I have behind me, even as an adult. That's your lab, let's say; that's the kind of thing you learn with, again, in space and tangibly. So I see it as a lifelong-learning kind of space, both in VR and in AR, a continuum that I hope to keep on working with, with more mini games or more setups. For example, I bragged about doing the welding a couple of weeks ago, I guess, and the goal is... the 3D printer I know well, but the workshop with the welding stations is new, so I want to bring this kind of thing to the workshop there. And to whatever labs where, when you enter the room, you're like, shit, that looks amazing, the potential of things we can do, the kinds of experiments, etc., but it's overwhelmingly complex. And I think this kind of setup, where you can learn and even use it in situ, alleviates some of those pain points.
Frode Hegland: Really, really nice. You know, quite simply.
Dene Grigar: If I can ask, I will ask the question. So this is wonderful, and I think it's always great to use very concrete, simple demonstrations for important big ideas. So you've done a great job with that. Moving that into the work we're doing: I'm imagining that we can adapt this for rethinking how text works in virtual space, right? I mean, that's what you're getting at, and that is what I'm interested in. So can you talk about how you'd extrapolate what you just did for text? In other words, with the text we'd be able to do what? Move it around? How? That, to me, is what you're trying to get to.
Fabien Benetou: Yes, yes. So indeed. In fact, I had an even, I don't want to say dumber, a simpler example, which was the coffee machine, so that everybody can relate to it; if you have more than one button, you might get stuck. But I thought the 3D printer looked a bit cooler. The idea is more like: you're reading a paper about 3D printing techniques, and at some point in the paper there is some experimental protocol, some way to do this. That would be, let's say, the easier transition in our context, where the content is text and the paper is next to the 3D printer, because you need those steps there, in context. But again, my hope is to keep on abstracting away, so that it works even for something like linguistics and grammars, to give those otherwise, how do you say, unembodied things, really abstractions... Yes. And that's also why I have the LEGO Education SPIKE kit behind me. If you don't know it yet, I really recommend it, it's just fun. You have little servos, not engines, motors, basically, and to control them, to program them, you use Scratch, basically block-based programming. So that's, I think, a way of making intangible things, I mean, they remain intangible, but abstract concepts, more graspable, or giving them some affordances.
Dene Grigar: Let me respond to that. There was a company here in town that my program did some partnership work with some years ago, called RealWear. They were making these augmented reality headsets so that, for example, if you were a telephone repair person who climbs up the pole to repair the wires or cables, and you get to the top and realize you need some information from the manual, you wouldn't have to go back down to the ground, read the manual, and then climb back up to do the job; people were going up and down, up and down, all day. So RealWear developed this augmented reality headset that had the manual embedded in the glasses, so they could see the manual as they were working on the object itself. They ended up making a lot of money and expanded into other kinds of fields, and we worked with them on tourism: how can tourist agencies use these headsets for giving walking tours of towns, historical walks and things like that. So we did a project with them on this. What I'm seeing with this for our project is that, for example, I'm working on text, on archives, in my office, like I was yesterday: I'm sitting there with a box of material, papers sitting out in front of me, and I need to consult information from another place, other pieces of writing. And I don't have to go back and forth; it's all there. The writing is right there in my headset as I'm looking through the glasses at the archive. So that might be one way to think about this as we move to the future.
Fabien Benetou: I think yes, it does. And I think one of the challenges in that situation, the ontological aspect, let's say, is: what is the token? Is it one, let's say, box of documents? Is it a single document? Is it a page of a document? Because once you say, okay, it's block-based, it's interesting, but what does the block itself represent? And I would tend to think that it's a little bit like when I put the cube, or let the user put the cube, in front of the target. I mean, you can have heuristics, but I would say it requires the expertise of the person doing the manipulation. For example, for you, what's one unit? Is it a document? Is it a box? I literally don't know.
Dene Grigar: Yeah, that makes sense. Thank you.
Dene Grigar: Hello, Mark. All right.
Mark Anderson: Well, got some new buttons now, haven’t we?
Speaker5: Yes.
Mark Anderson: Sorry. I think we've got Leon queuing in the margins, everyone.
Speaker5: Leon?
Speaker8: I got unmuted by the host, so thank you, host. Yeah. One question, actually, for Fabien, and perhaps also for the host, dear Frode: have you ever thought about having this sort of depth dimension in XR, and how it could potentially impact different ways of showing information based on your distance to the text, or to a certain button, whatever? What I'm thinking is that if you look at a book, it has a cover, and usually that is seen from afar; in a shop, for example, you see a cover with a very nice image, very big text, very catchy, and then you basically walk towards it and open the book. In XR, if you have text very far away, you might not be able to read it, so you don't really know if you're attracted to it. So I was thinking about, for example, these children and how they could move the penguin, and whether you should first present something familiar, like, I don't know, a PlayStation controller, and then when you walk to it you can see text. Right now, I think it's going to be text that is how they connect with it. I'm curious about your thinking behind that, and why it ended up like this.
Fabien Benetou: So it's actually not just text. There is audio also; there is text-to-speech, because the youngest kids just can't read. They can recognize letters, but honestly maybe not full words. So there is also speech. But one mechanism I did not show, because I haven't done it properly yet, and which I think is important especially for games, and it's the same principle for tutorials: for example, you do next step, next step, next step, you're in front of the 3D printer, and somehow the person is not clicking next step. So you know there is a problem; if after one minute there is no new action, something is off. And if the goal truly is to teach, to let the person do more than they could do initially, which I think is the point even when you're reading a paper or starting to use a machine, you need to be able to do more than you did before, then you need that kind of feedback mechanism. No action for a while? Well, what's happening? And then maybe a reminder: oh, in order to do this, you need a right pinch; in order to do that, a left pinch, etc. So basically a feedback mechanism that is more and more explicit; I hope that answers your question. Initially you start with, honestly, too much, and then... because I think in terms of pedagogy it's the same: if you solve the problem on behalf of the kid, or if we read a paper and I give you my conclusion without you having the time to draw yours, I don't think it's great, because you lose your perspective. So I think gradual and more and more explicit feedback is a good heuristic.
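A minimal sketch of the escalating-hint mechanism Fabien describes, assuming a per-step idle timer; the thresholds, hint texts, and showHint helper are illustrative assumptions:

```js
// Escalating hints: if the learner takes no action for a while, surface
// progressively more explicit help rather than solving the step for them.
const hints = [
  { afterMs: 60_000,  text: 'Still there? The next step is on the panel.' },
  { afterMs: 120_000, text: 'Pinch with your right hand to press Next Step.' },
];

let lastActionAt = Date.now();
function onUserAction() { lastActionAt = Date.now(); } // call on any input

// Hypothetical display helper; a real version would show each hint once.
function showHint(text) { console.log('HINT:', text); }

setInterval(() => {
  const idle = Date.now() - lastActionAt;
  // Pick the most explicit hint whose threshold has passed.
  const due = hints.filter(h => idle >= h.afterMs).pop();
  if (due) showHint(due.text);
}, 5_000);
```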
Frode Hegland: So my comment on that is very constrained by current experience, because actually working with the headset is in many cases quite different from what I expected it to be. The use cases I'm still working on are generally desk-based, generally forward-facing for most of the work. So, Bruce Horn accepted my suggestions document yesterday, to give to the right people on the Vision team, and in there one of the key things I complain about is how, if you use the Vision and you turn it off and on again, it loses all the spatial layouts, which is a major issue. Of course, WebXR is different, but of course WebXR doesn't know the rules. I'm not a fan of VR unless it's in a playground with a huge room, because I don't like fake walking. I even think that teleporting can be bad, except for, believe it or not, Fabien's teleporting that we saw in the last demo. That makes sense for that kind of information, because you're teleporting to a thing, rather than teleporting just to keep jumping around; it's very different. So I think what you're talking about is hugely interesting as a research topic. The notion of having book covers, I think, is very, very important, and things such as that. Now, in the paper I've started playing with for our Hypertext submission, the working title is currently High Resolution Thinking in Journals; I forgot that earlier when we were writing things down, just a working title. The notion there is also: what is a document?
Frode Hegland: This is very much related to what Mark was saying a short while ago. I believe strongly that the end user should be able to choose their own bindings of a document, same as a publisher. We're lucky that last year's Hypertext conference proceedings were released as one PDF as well as individual documents, for example. This goes into the notion of what a book cover is. Even today, if Dene buys a book and I buy the same book, because we're in two different markets the book covers will most likely be different. So the whole question of whether it is an artistic display, to help you find it again and to sell it, or whether it's auto-generated, raises really, really interesting topics. So I think this is definitely somewhere to keep going. Now, in the map thing that we looked at slightly earlier, one of the things that needs to be possible is to save views. So I'm wondering if views can become book covers, so to speak; and what happens when you leave a view, when you make it smaller, what attributes are presented, becomes a very interesting issue. Way back with Mac OS 8, I did a lot of testing with various designs of how a folder could automatically show what's inside. If it's some images, it's relatively easy, but if it's a text document, it becomes a different thing. So thank you for your provocation on this. And over to Mr. Anderson.
Mark Anderson: I was just thinking on the point that's been raised, probably triggered by Fabien's observation about what the token is that's passed. When you were talking about things like book and album covers, I was thinking: if I could just magically produce that as an object, to put as a label on something, that's probably what I would use. Because this is the difference, I think, between a personal workspace and a shared workspace. Somebody coming into my workspace, I don't expect them to understand what's been put somewhere just because I put it down there, and what has a real place. The point being that even if that blob over there has a book cover that actually isn't the version I've got, it's the thing I remember. I know that if I go to that thing, or bring it to me, what lies behind that sort of memory-palace object is, say, a part of that book, or the book is the trigger for me to remember what's there. Then I can have quite quick recall of things, and obviously things that are more pictorial probably work slightly better at distance than text. Because, you know, we've discussed this thing before about whether text scales with distance or doesn't. Well, if it doesn't scale with distance, then it rather breaks down your depth perception, because it plays with your knowledge of perspective.
Mark Anderson: Plus it just gets noisy. Whereas actually seeing some pictures, in a sense, where you know what they are, we're quite good at filtering that out: I know what that is, I don't have to worry about it, and our mind can just flip over it and look for the thing we wanted. I think the challenge with that, sorry, before I go on to the "so" part, is that it can get to the thing about what the block is. Well, you just give it your own label. The problem is when you want to do that with other people. Because for yourself, block A can contain fifteen times as much as block B and I don't care, because I just need what's inside. In other words, to link back to Fabien's point earlier, the size of the block is almost incidental for me. The main thing is I need to have a handle to find the thing I want, and it needs to be visible in the space somehow, not intrusive, but visible in a way that lets me quickly remember, have recall of, what it is. Now, if I wanted to do that for a team of 20, I think that gets incredibly hard, because, rather like the example of two different people with different book covers, it really does matter if the picture is wrong. Whatever you choose, you normally choose because it has a strong association.
Mark Anderson: So you never have to relearn what that picture means; you know what that picture is, and you can then attach a further meaning to it. But if you've got 20 of you all trying to... oh, is that the picture of the thing? I actually think of something completely different. I think it gets incredibly hard. My temptation, therefore, at this point is to at least look at the personal workspace challenge before trying to scale it up to the collective one. I know collaboration is the thing that everybody knows and wants, but so much collaboration actually is just people standing in the same room shouting at one another, and there's not much collaboration in it. But if we're able to mine into perhaps the easiest space, what these representational handles mean in the VR space for the individual, I think it will make it easier to then widen things out. Lots of hands up. I'll shut up.
Frode Hegland: The question of the size of the box, I think, is really, really important, but I think we need a design language for that. So we have to decide what the smallest units should be, and how they should be presented, within the kind of stuff we're doing. Because if you are interacting with something, and it turns out it's an entire library, or it's just one sentence, these are obviously very different things, and in a digital environment there is no native affordance to tell us this. It's definitely an important part of the discussion. Over to Leon.
Speaker8: I totally agree that it's an important part to work on, because a while ago you also asked: what do you want XR to be, as a minimum? What would really excite you, expectation-wise? And I remember I answered near-infinite zooming. And I have to hand it to Fabien, because I'm now thinking again about his interface and the fact that you can see it as text. If I remember correctly, in most of your prototypes you can also see the function name of something, and you can just tap it. So basically it's not a black box. A game, for example, is a bit of a black box, because you press a button and you don't really know what's behind it, what the code is. And going back to this infinity kind of thing, I could even imagine the following, and a little bit of context: in JavaScript, every function can be called with .toString(), which means it will give you its own textual representation, the implementation of itself, so you can see the code behind the function. In a similar way, I could imagine, in Fabien's prototypes, seeing the function name, and instead of tapping it or activating it, you go really close to it, and then, sort of like poof, you see the implementation behind that function name. And I think these kinds of transitions, which basically go from the header to the paragraph under the header, can become very important in the future. Right now we write the paragraph below the heading, but with depth, maybe it's time to let the heading, or the function, transition to its detailed version: the code, or the paragraph. So that's just my few cents.
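A small sketch of the introspection Leon mentions: Function.prototype.toString() is standard JavaScript, while the proximity-based reveal and its threshold are illustrative assumptions:

```js
// Any JavaScript function can report its own source text.
function greet(name) {
  return 'Hello, ' + name;
}
console.log(greet.toString()); // prints the function's full source

// Illustrative proximity reveal: show the name from afar,
// swap in the implementation when the viewer gets close.
function labelFor(fn, distanceMeters) {
  const REVEAL_AT = 0.5; // hypothetical threshold, in meters
  return distanceMeters < REVEAL_AT ? fn.toString() : fn.name;
}

console.log(labelFor(greet, 2.0)); // "greet": just the name from a distance
console.log(labelFor(greet, 0.3)); // the full implementation up close
```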
Fabien Benetou: I'll jump in there. So first, nice, because I didn't even know that introspection mechanism. There are quite a few languages that allow you to check, not just execute, to see what's behind the curtain. JavaScript is a little bit crazy with this; it lets you do a lot of stuff that you arguably shouldn't be able to do, because you're probably going to break things, to be honest. But what I find beautiful about it is: you want to go mad? Go for it. So I think it's really interesting. I'll also put back the short talk I did for The Future of Text, was it last year, on the edit button, having an edit button to edit everything. That's exactly this kind of mechanism. By default you don't want to see how the sausage is made; you just want to eat the thing. But if you do, you should not have artificial barriers to it. So if you can have some kind of, as you showed with the webcam, getting closer and closer to it, why hide it? Now we can play with actual scale, even arguably dimensions, whatever; we shape the space literally to what we want. And if we want to keep on looking closer, to the point that we see inside the box and how the box is made, and eventually even adjust a little parameter so that the box becomes another object, I think it's just empowering. So it's definitely aligned with all those ideas. Makes a lot of sense to me.
Frode Hegland: So Mark, go ahead. But we need to move on to Andrew as well in a minute.
Speaker5: Of course. Okay.
Mark Anderson: So, very quickly. I'll pick my words carefully, because I'm trying to avoid making this a sort of binary, zero-sum thing, but I'm actually cautious about trying to decide, label and design what size representations things are. I just had this in another project I'm in, where someone has decided to define everything we're doing, from the base of the world downwards, in metadata. And I said, well, have you started looking at the metadata, the data we actually have? And they said, no. Why? Well, it doesn't fit your model; why don't you start with what we've got? And I'm tying that back to here: I'm thinking that it does take us down a slightly different design path. Because if something can be whatever size it needs to be, your question then becomes: what is this when I interact with it? How do I know? How does it tell me what it is? If I start from the premise of saying, I will decide what all the things are, and I will have a taxonomy, and I will have this, that and the other, then our creative world has to fit inside that framework, and a little of me dies when I think of that. Neither is wrong; it's not that the two are in total opposition, everything's on some sort of spectrum in a sense. But I think there's a lot to be said for holding on to the human element. We don't tend to think in hard-edged boxes and frameworks, yet most of our design tools tend to be quite hard-edged. So I'm all for keeping this as loose as possible and shifting the identity problem into one of discoverability.
Frode Hegland: Yeah, I think that’s fair enough. But at least for the work at hand, we do have some level of granularity defined. We have one: a boundary is a document. There are many ways you can view the contents of a document, of course, then a paper. And then outside of that, you either have a collection slash library, or you have a journal. Then you get the interesting other boundary, which is temporal: earlier journal editions. And then, of course, you have people. So there are many, many things that relate, and I think it’s really important that we allow the user to navigate along any of those axes to find and see what’s relevant to them. But there are a few things we know we have. So we should, you know, go back and forth on this one. Right. Any final things on Fabian’s work for now?
Speaker5: Thank you. Fabian.
Frode Hegland: So on the agenda, we have what we have already gone through. I just want to check, then, that we’re happy with all the stuff for inviting for the symposium. All right.
Dene Grigar: Can I recommend something? I just saw your list. This email list, I thought was for the book.
Speaker5: I think it’s. We also.
Frode Hegland: Yeah. The thinking was that if we invite someone, we invite them for both.
Dene Grigar: Okay, well, let’s step back for a second, because if you look at the symposium that we put forward to the Sloan Foundation, we’re looking at students, we’re looking at a whole lot. We’ve got 50 people we can put in the room. A lot of these people, like Astrid Jensen, who’s a good friend of mine, will not be flying in to give a paper. Right? She probably won’t do that. So we’re looking at a long list of people, many of whom will probably not be able to come to the symposium and present. It’s going to cost some money, it’s in the middle of a school year, all that kind of stuff. So I think we want to think about who we want to invite to the book. At the symposium, we can also have some people there. I would like to have people there from my region, so that we can start to build this concept of what we’re doing in this part of the world; we’re spreading the news of the future of text in the United States for the first time. When you first got here, I had you over for dinner and introduced you to folks from the game industry very interested in working in VR. There are folks in Portland; there’s the Technology Association of Oregon. So I’d like to say that I could at least have, you know, maybe ten seats that I can give out to some movers and shakers in town, who may even give us money to fund a lunch or fund a breakfast. I mean, one of the reasons why I want to reach out to these people is to get some funding.
Dene Grigar: And if we say you can’t come unless you give a paper, I can tell you that Nathan Stallman and Jordan Gibney from Autodesk are not going to write papers, but they may want to come and hear what we’re doing. We’re also looking at more grant money in the future, and Autodesk and Epic, with Unreal Engine, give out big grants. So I think it’s important that we’re not just looking at the symposium as people giving papers, but also as an audience where we’re spreading the news, right? We’re not just talking to ourselves. I also plan to bring students from my spatial computing class, not all of them, but the top students. And I’m also hoping that the Warhol Foundation will give me the grant I submitted the proposal for, and we’ll have an exhibition of student works, as well as other artists. So what I was imagining is that we have two lists: people that are going to be in the book, who we want to write something, and then a list of people who are going to come to the symposium to give a paper or be present. Some of the names I threw out and put into the chat a couple of weeks ago were people I thought we just need to have come. So shall I start? Also, this is view-only; I can’t edit it. So if you give me the rights to edit, I’ll make a second sheet and call it invitees.
Speaker5: Well known participants.
Frode Hegland: Yeah, I think we’re talking about very much the same thing. As for who you want to invite to have in the room, I think we all quite simply trust you. If someone is not going to be speaking, that’s absolutely fine, whether a student or someone in the region; I don’t feel we need to have them on the list. The idea here is just that when we invite someone to actively participate, they’re invited to write a paper and/or be part of the symposium. So they’re kind of the ones we want to be okay with our name being associated with.
Dene Grigar: And I’m not seeing my name on here. I don’t see Mark’s name on here. I mean, I think there’s folks in this room that should be listed on here.
Frode Hegland: Well, this is our list for who to invite, and we are already invited. So this is not an attendee list.
Dene Grigar: Okay. Well, the room holds 50 people, so it would be nice to know who will be invited, so I’ll know how many seats will be left for open seating. That’s what I’m trying to get at; I’m trying to think through the minutiae.
Speaker5: Yeah.
Frode Hegland: Right. So you say there are 50 seats, and not that many people are going to fly in from all over the place. And this list so far has 45... well, a lot less, because some of them are empty spaces; there aren’t that many people on here. I just didn’t want to invite anybody without you agreeing; maybe there’s someone there’s some kind of an issue with. And also, if there’s someone you want to invite to be in the book and to present at the symposium, they should be on here. I can invite you, of course, or if you just click Request Access, then I get a one-button email or something; that’s probably the easiest. I’m very happy for you to write in here, obviously. I can try to make it... okay, I made it so anybody with the link can be an editor. Because it’s not a shared link, it should be safe enough.
Dene Grigar: And I apologize for not understanding. I thought this was for the book only, because it said Future of Text, volume five; I did not realize it was also for the symposium. I do see them as somewhat separate.
Frode Hegland: Why do you see them as separate?
Dene Grigar: Because people who are wanting to present must do it in person. And so for the symposium, the physical space, we want it to be full of people. We don’t want everybody online watching and no one present. So who’s really going to be presenting?
Speaker5: Yeah, yeah, yeah.
Dene Grigar: The number of people in the book is going to be different from the number of people presenting, because of the fact that they can’t come here.
Dene Grigar: Yeah, and one more thing: I’ve got to plan for food. So how many people am I going to feed for free? I’ve got to start thinking about the $3,500 we have for, you know, the breakout sessions, the tea in the afternoon, lunch, that kind of stuff.
Frode Hegland: The thing is, the book and the symposium have, for four years now, been the same thing. The book is basically the proceedings from the symposium, where people have the option of a transcript of their presentation, which is not brilliant, or a paper on the same thing, or something different. So that was the same thinking here. If there’s anyone on here we should delete, we delete them; that’s fine. We have very different views of the industry, so you may know something I don’t. And if there’s someone you would like to have present, and that goes for all of you, obviously, then please put them in here. Unless someone is literally the king of the world, we do not want Zoom presentations; Dene and I have decided that if you want to present, you absolutely should be there in the room. But if it turns out you cannot, and all you want to do is provide an article for the book, that’s fine too, if it’s someone we’ve invited. So I think we’re definitely on the same page on this. No, it’s not going to be a free-for-all.
Dene Grigar: You’re not going to add a second section that says people to invite who will never publish anything in their lives, but we want them there because they’re important people in the region interested in VR and XR?
Frode Hegland: So what I have done previously with people like that is ask them to submit a paragraph.
Speaker5: Well.
Frode Hegland: And if you look at the last two books, it’s been very, very interesting. We have a few pages of thoughts from industry leaders who just don’t have time to write something big; they put something in. A few of them actually got inspired, because we asked so little of them, to write a whole piece. That’s how Stephen Fry came to finish our last book, for example. I mean, this is not an academic journal as such. And I’m not saying we have to pressure them into writing, not at all, especially these games people. We would like them to be present, and it should be nice for them, of course.
Speaker5: Okay.
Frode Hegland: You don’t seem very happy with that.
Speaker5: No.
Dene Grigar: It’s fine. Let’s move on.
Frode Hegland: Okay. So you will go through the list and tell me if there’s anyone to delete, and then you’re going to have a separate column for people to invite who are just going to be in the seats. Right?
Dene Grigar: I probably won’t delete anybody. I probably would just add a list of people that I’d like to see come.
Speaker5: Okay.
Frode Hegland: Oh, that’s cool. And guys, all of you: if you have anybody, whether you know them or not, just share it right now. Andrew, you’ve been very patient for an hour and fifteen, and we have... oh, hang on, I haven’t linked it on the web page for those who want to try it that way. If you want to start talking, I’ll provide the link to your demo separately.
Andrew Thompson: Cool. Yeah, I was just trying to pull it up, but if you’ve got it all ready for everybody, that’s great. I’ve got a little video showcasing the stuff; it’s a very quick devlog. It’s just: you can read documents now, the main content. There are some nuances to it. It does very minimal formatting, similar to Reader, where you have the headers formatted differently. You’ve got the title, the author, and then the rest of the content is just kind of there. Images do not render, and things like tables and bullet points don’t get formatted; they’ll show up, but you’ll just kind of get line breaks, because it’s very simple right now. And that’s fine for just reading the content, but that’s where that’s at. It has a border around it; I tried something a little bit different, closer to the experience we did for the Focus. So it has a sort of off-white background with a border, and the border has interactions all around it. On the left is the handle, so you can drag it around like normal, but the top and the bottom can be dragged to expand or shrink the document, so you can see more or less of the text, just for organization. And of course, there’s a scroll bar on the side, which can be dragged. But if you don’t want to drag and you just want to continue reading, you can tap the two scroll arrows at the top and bottom, which will scroll the page by half the view. So hopefully it’s usable. For the first time in a while, it’s actually supported by the save system from the start, instead of being something I have to put in later, so that’s useful. Lots of stuff changed in the back end, but that doesn’t really matter in this situation. Yeah, I see your hand.
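A rough sketch (not Andrew’s actual code) of the scroll-arrow behaviour he describes, where each tap advances by half the visible panel; panelHeight, maxScroll, and relayoutText are assumed names:

let scrollOffset = 0;
const panelHeight = 1.2;  // visible window height in scene units (assumed)
const maxScroll = 10;     // total scrollable extent of the document (assumed)

function onScrollArrowTap(direction) {  // +1 scrolls down, -1 scrolls up
  const step = panelHeight / 2;         // half the view per tap
  scrollOffset = Math.min(Math.max(scrollOffset + direction * step, 0), maxScroll);
  relayoutText(scrollOffset);           // assumed: repositions and re-clips the text
}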
Leon van Kammen: Yeah, I was just curious about the scrolling. Did you eventually go with the technique you were thinking of last week when we talked?
Andrew Thompson: Yeah, ended up having to. There were two types of clipping I was considering. One was post-process clipping, which would use sort of a mesh and then clip through it by rendering it twice with post-processing, and I did not want to do that. So I used troika’s... I think it’s clipRect. Basically you can pull the bounds in on a single troika element. But since I have every element rendering on its own, I just needed to do a bit of math to calculate all of them together, so it only clips the ones at the top and then hides everything else. Ended up working pretty well; it just took a little bit of setup. I was surprised at how good the performance was. I was kind of expecting some lag because of all the calculations, but it runs great.
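A minimal sketch of the per-element clipping Andrew describes, using troika-three-text’s clipRect property; the panel dimensions and the per-mesh userData fields are illustrative:

import { Text } from 'troika-three-text';

const PANEL_W = 1.0;        // panel width in scene units (assumed)
const PANEL_TOP = 0.6;      // y of the visible window's top edge (assumed)
const PANEL_BOTTOM = -0.6;  // y of the visible window's bottom edge (assumed)

// Each block is its own troika Text mesh, e.g.:
const para = new Text();
para.text = 'Lorem ipsum dolor sit amet...';
para.fontSize = 0.02;
para.userData = { baseY: 0.5, height: 0.1 };  // top edge and measured height

function layoutWithClipping(textMeshes, scrollOffset) {
  for (const mesh of textMeshes) {
    const y = mesh.userData.baseY + scrollOffset;  // block top after scrolling
    const h = mesh.userData.height;
    mesh.position.y = y;
    // Blocks entirely outside the window are hidden outright; only the ones
    // straddling an edge need the clip rect to do any real work.
    mesh.visible = y > PANEL_BOTTOM && (y - h) < PANEL_TOP;
    // clipRect is [minX, minY, maxX, maxY] in the mesh's local coordinates,
    // so the window edges are translated by the mesh's own y position.
    mesh.clipRect = [-PANEL_W / 2, PANEL_BOTTOM - y, PANEL_W / 2, PANEL_TOP - y];
    mesh.sync();  // ask troika to re-render the text
  }
}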
Frode Hegland: Going to record a little bit while you’re talking. Yeah, I’m very impressed. And I have to say, your minimalist design style is very early-Mac-slash-futuristic, in a very good way. So that’s really lovely.
Andrew Thompson: Thanks. We can throw in actual graphics at some point. I don’t have the time for that at the moment, but it’s supported.
Frode Hegland: I can’t see that we would need that. Anyway, that can be an ongoing discussion. Stopping the recording; just want to make sure it’s saved. Good. This is really, really nice. Really, really nice. Anyone else? Thoughts? Comments? Actually, yeah.
Leon van Kammen: I have to go, unfortunately. So thanks a lot, and it was a pleasure. Cheers.
Frode Hegland: See you soon. Bye. And the text is obviously not as readable as elsewhere. I actually don’t have much of a problem with that, because in a sense that is outside of our world; we expect the systems to get better at that.
Andrew Thompson: So when you say obviously less readable than elsewhere, are you talking about webXR versus native? Okay. Yeah, that’s something we always knew would be a problem. I was going to say, I don’t think it’s any worse than what we’ve been working with so far with the citations. But yes.
Adam Wern: I had the opposite reaction: I found it very readable on the Quest 3. It’s kind of pushing the limits of the Quest 3, but I can read quite small text quite well here, with the color combinations and the minimized aliasing and so on. So I think...
Frode Hegland: Are you talking about the library list, or are you talking about...
Adam Wern: No, no, the actual document is surprisingly readable for me on the Quest 3. Well, it’s just my opinion; I think it’s perfectly readable with my sight and my setup here, more than expected. But I don’t know how it looks in the Vision. We’ve been doing some tweaks where we render at very high resolution on the Quest and then scale it down; I don’t know if that translates to the Apple Vision. The Apple Vision does not run at full resolution in webXR. So it could be that the Quest is catching up, and the text here is quite good on the Quest 3, which is good for us.
Frode Hegland: I’m not getting it to load properly now. Hold on, let me look again.
Dene Grigar: Loading really well for me. It looks good. It’s a little bit small.
Frode Hegland: What is small?
Dene Grigar: The text is small.
Speaker5: Right. But.
Dene Grigar: It looks great. I like the image; I like the environment a lot.
Andrew Thompson: And I still have, in my list of notes, changing the background color a bit. I know we asked for a test of a slightly darker environment.
Adam Wern: What we need to keep in mind is that the Quest Pro, Quest 3, Quest 2, and Apple Vision Pro all have different panels, both panel types and maximum panel brightness. So there could be a substantial difference; if we pick one combination, it will not look the same, because the panel types are very different and the optics are different. So either we find some good defaults for each system and try to display those per system, or we add some sort of user-defined color profiles, as we talked about early on. Yeah.
Andrew Thompson: Having a palette swap would be cool. I think, also, just for the sake of showing the project off, we don’t need to worry about it: just design it for a specific headset, and that’s enough for this, since we are kind of just building a prototype. But if we have time, I’d love to get color palettes in there. Pretty easy, and they’re cool; people like them.
Frode Hegland: Yeah, and they are important. While we were talking earlier, I told the programmers, for instance on Author, to give me a high-contrast mode, so they should be able to do that in a few days. Even little comments like that show how important it is to do that testing. So yeah, I think we should prioritize that. Fabian.
Fabien Benetou: Yeah. To me, first, it’s very readable on the Quest 3. The resizing and the flow of the document once open are, to me, very good; I’m not sure what else could be done there. So that’s great. Now, if I put on the Vision Pro, it looks like shit. And I think it’s honestly just due to the webXR implementation. It’s a shame that we can’t ask Brandel about this in more detail right now, but he would know and probably have suggestions, because it’s kind of his job, I guess. And we know it’s indeed not the hardware, neither the rendering nor the screens nor everything mentioned before. So let’s bug Brandel about this. Yeah, to me it’s much, much more readable on the Quest 3 than on the Vision Pro. I don’t know if you have the same experience.
Speaker5: What do you think?
Andrew Thompson: What are the rest of you seeing? Oh, you go first.
Frode Hegland: Yeah, sorry. What I’m saying is: the list of documents, the kind of library, I don’t have a problem with; it’s fine. When I then go in, it’s incredibly flickery, as though it’s waiting for foveated rendering and it just doesn’t do it.
Andrew Thompson: I wonder if it doesn’t have foveation turned off; maybe the Vision just ignores that setting.
Frode Hegland: Well, I mean, it’s so annoying: when I record this, it doesn’t really reflect what I see. The headings look fine, the really big text, but with the smaller text, to me, there’s like a moiré pattern or something. It’s not good.
Andrew Thompson: I wonder what’s going on.
Fabien Benetou: It’s aliasing I think. Yeah.
Andrew Thompson: Yeah. So I wonder if it’s also resolution-based, because I believe webXR runs at half resolution on the Vision; I don’t know if that’s been changed yet. Because the text is a lot smaller, you may be getting it rendering halfway between pixels, and you’re getting that effect. It’s unfortunate that you can’t capture it on video, because I’m not exactly sure what it looks like. I’ll take a look next time I get access to Dene’s Vision. But I don’t think there’s anything I can do about that.
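For reference, the renderer parameters being alluded to here (and in Fabien’s chat note below, “might be a single threejs parameter for the renderer”) do exist in three.js; whether visionOS Safari honors them is exactly the open question for Brandel. A minimal sketch:

import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.xr.enabled = true;

// Must be called before the XR session starts; values above 1 ask the
// runtime for a higher-resolution framebuffer than its default.
renderer.xr.setFramebufferScaleFactor(1.5);

// 0 disables fixed foveated rendering (full detail in the periphery).
// This is only a hint; a runtime is free to ignore it.
renderer.xr.setFoveation(0);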
Frode Hegland: I’m not sure if I can explain better what I’m talking about.
Fabien Benetou: But I can try to explain. And I know that Brandel would know, so I think I would again bother him.
Speaker5: Can I ask a question?
Dene Grigar: Can I ask a really simple question? I’m wondering... I mean, at some point things will come together; Apple will improve its ability. It’s going to compete with the Quest, right? It’s going to do better. Is it possible, though, when you enter into the space... For example, you look at the website, and we see it in the Vision Pro and it comes up for us. I’m wondering if we could have a setting for Quest and a setting for Vision Pro, so that if you’re using the Vision Pro, you go to this particular website, and if you’re using the Quest 3, you go to this one. And maybe there’s a slight difference in the options that you are able to turn on and off.
Andrew Thompson: I think I might be able to actually detect that by default; you might not even need separate pages. The thing is, though, what would we do with that information? Because I can’t up the resolution no matter what; that’s a limitation the Vision has imposed.
Andrew Thompson: Yeah, I could make the font larger in general, maybe.
Dene Grigar: Yeah, I would say so. But I think... I mean, I just think about the early days of the web, folks. I think I’m older than the rest of you, but I remember when people would make net art and they would say, best viewed on Netscape 4.2; you can’t see this in Firefox, right? We used to have to do this, until at some point things kind of came together and we were able to open it up in any format. Now it’s nondenominational, as I call it. We’re not there yet with this technology; we’re very denominational. We don’t have standards yet, although we’re working on them.
Speaker5: Yeah. I’m with.
Dene Grigar: So can we.
Adam Wern: I’m with Dene here, I think. For the time being... of course, we could wait it out a bit, and we can check with Brandel whether some things are going to improve. But I think for both there will be profiles of a kind; there are some tweaks we can do for the Apple Vision and for the Quest to get the best quality we can push the headsets to. And it will be varying some of the parameters we had: foveation, the resolution we render to (which is not the same as what it will be displayed at, but the texture we render to), background colors, the color combination. It will look different in the Vision Pro and in the Quest, as I said, because the display panels are different, so we could pick colors, and perhaps even default font sizes, that look better. It will be some work, but not as much work as one might expect, because these are variables we send into the system; having a profile, and fetching those variables from that profile, will not be that much work, I think. And it could be worth it, but it could also be a bit of a later thing, once we know what things we need. It could be the kind of tweaks we do the month before Poland, to get it looking perfectly right, because some of these things we may not even know we need yet. There are more things to try out, and also to ask Brandel what the status is. We should really ask him.
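A sketch of the per-headset profile Adam describes: a small table of display variables keyed by a crude device guess. The values and the detection heuristics are illustrative only; WebXR has no official device-identification API, which is part of Fabien’s objection below.

const PROFILES = {
  quest3:    { background: 0xf4f1ea, fontSize: 0.022, foveation: 0 },
  visionPro: { background: 0xffffff, fontSize: 0.028, foveation: 0 },
  default:   { background: 0xf4f1ea, fontSize: 0.024, foveation: 0.5 },
};

function pickProfile() {
  const ua = navigator.userAgent;
  // Meta's browser identifies itself and the headset model in the UA string.
  if (/OculusBrowser/.test(ua) && /Quest 3/.test(ua)) return PROFILES.quest3;
  // visionOS Safari reportedly presents a desktop-Mac user agent, so any
  // detection of it is guesswork; treat this branch as an assumption.
  if (/Macintosh/.test(ua) && navigator.maxTouchPoints > 0) return PROFILES.visionPro;
  return PROFILES.default;
}

const profile = pickProfile();
// The profile's variables would then be fed into the existing setup, e.g.
// renderer.xr.setFoveation(profile.foveation) from the earlier sketch.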
Andrew Thompson: And if we get lucky, maybe it’ll update and give us some more resolution.
Dene Grigar: We could just quickly send an email to Brandel, tell him this idea, and ask: should we work on this now? Without making him have to commit, and see what he says.
Mark Anderson: Internet Explorer for visionOS. Oh, dear.
Dene Grigar: Anyway. So I think that’s something we can think about, Andrew. And I’m sure Brandel will come next week; we can ask him, tell him what our idea is, and see what his response is. He might say, well, you could just wait a little bit on that, and that’s all he has to say.
Speaker5: But Fabian.
Fabien Benetou: So, I was the co-founder of a startup that tried to do this for 360 videos, and I gave up, so I really recommend not doing this. It matters if you make a product; and it’s kind of the promise of webXR that you write code once and it works everywhere. But if you...
Adam Wern: So, Fabien, what is the “this” you refer to? What do you recommend against?
Fabien Benetou: The content per device?
Adam Wern: Okay.
Fabien Benetou: So, of course, if it doesn’t work, and it’s been promised that it’s going to run on the Vision Pro, it must be done. But I really advise against this, because it becomes very complicated quite quickly, and I don’t think at the exploratory stage that’s the point; it’s more like, does it even make sense, what’s interesting, et cetera. It is a lot of work to support, even if it looks like, oh, there’s just the Quest Pro, just the Vision Pro, just the Quest 3. It’s a never-ending thing. So if there is a way that the code doesn’t change per device, that’s really, really much better.
Adam Wern: Are you talking about new functionality, like turning things on and off, or are you talking about brightness, color, and font sizes? Because I think the latter, changing the font size and the color palette, or perhaps even a resolution or foveation setting, is very different: that is much easier to set than turning custom functionality on and off. So we need to be clear on which we mean.
Fabien Benetou: Yeah, well, it’s the same difference between a product and something exploratory. If it’s a product, then you can have the different options, changing fonts or changing resolution, as options; it’s not super hard, but it still complexifies the code base. Whereas if it’s functionality, for example eye tracking, and it’s never been tried before to, I don’t know, manipulate the content of the document with eye tracking itself, then in terms of research it’s interesting, and I think it should be done. So I would argue that going crazy with functionality, even if it’s only per device, is more interesting in terms of lessons learned, even if it’s complicated in terms of maintenance. And in terms of maintenance, every time something is specific per device, if it’s not done right, it becomes super complex.
Frode Hegland: So my perspective on this is very clear. We put a little bit of effort into optimizing for the different platforms, but not much, because I feel our work is to look at interactions beyond anything else: how to use the spatial dimensions. Everybody knows that text on the Vision is super crisp and fantastic when Apple wants it to be; the fact that in webXR it currently isn’t is kind of a future issue. I think it would be a rabbit hole for us to really try to optimize beyond some basic font sizes and things like that. There’s so much interactivity we need to do. Think of when we put the headset on someone’s head, probably in Poland: what headsets will we have with us? We’ll have three Visions, and then we’ll have a couple of Quest 3s, right, and one Pro. That will be most of them, you know.
Dene Grigar: I’ll be carrying three: the Apple Vision Pro, the Quest 3, and the Quest 2.
Frode Hegland: Right? So yeah, I really don’t think we should optimize for product; I think we should optimize for interactivity. So someone might come out of the experience in one of the headsets and say, on the Vision it was hard to read, but the interactions were there, and I tried it on the Quest 3 and it was much easier to read. I can’t really see how we can improve what’s on the Vision, unless Brandel gives us some special access or knowledge, of course. It’s not like Andrew can just add some code and suddenly it’s better on the Vision; he would have done so already. Right, Andrew?
Andrew Thompson: If I knew of the code, anyway. There’s a chance there’s stuff hidden that I don’t know about. Yeah.
Adam Wern: We managed to tweak some things a while ago that made the text clearer, things that were unknown to us. Brandel tipped us off about one setting, and I found another setting; we tweaked them, and it looked much better. So there could be more of them, and there could be combinations that are better as well. But at least font sizes and distances are definitely things to experiment with, and those are individual as well.
Dene Grigar: I’m just thinking... I mean, we teach design in our program, and there are two types of design, right? There’s the aesthetic design, and there’s UI and UX. UX is functionality, which includes interactivity. And so when I’m thinking about using this for my own research, like I’m going to now use the Vision Pro to do the very things that I have talked about, I have to fucking see the text. If I can’t read it, I can’t work with it. So yes, there are going to be some design aspects we need to embed in this, whether it’s for Poland or not. At some point.
Frode Hegland: Unless Brandel says no. Sorry, I didn’t mean to cut you off, but I’m just saying.
Dene Grigar: Cut me off.
Speaker5: But that doesn’t matter.
Frode Hegland: Please continue. Please. Please continue.
Speaker5: Not just I have to.
Frode Hegland: Okay, okay, I did cut you off. Andrew is doing a really good job. He can’t just go in and change a setting so that it looks better on the Vision; he would have done it. So whether it’s UX design, graphic design, or whatever it is, there isn’t a magic thing, unless Brandel knows something we don’t know.
Dene Grigar: That was what I was trying to say. That’s the point: let’s talk to Brandel and see what can be done. And also, this could come with time; there may be another update to this. We also don’t have to work with the Apple Vision Pro; we can work with the Meta Quest, not a problem. But what I’m saying to all of us, just to finish, is that we have to have something that’s readable, to do reading for academic purposes. And if we go to Sloan and say, look, we made this really great thing, and by the way, it’s hard to read, but, you know, whatever... that’s not going to fly. Mark’s laughing. I mean, it’s true. And I’m not trying to be difficult; it’s just a logical thing.
Frode Hegland: I don’t know where the disagreement is. In terms of where Andrew should put his efforts, if we’re discussing doing different kinds of profiles for the different systems, I think that would be, as Fabien has said, very much going into something that’s going to spiral and become quite massive. And if we make some changes, they may not flow across to the other platforms. Of course we want to make it readable, but as we make it more readable for the Vision, it should also become more readable for the Quest. But until Apple updates the way it does webXR, we can’t separately update it for the Vision, right?
Speaker5: Yeah.
Mark Anderson: Are we capturing this really interesting stage in our narrative? Because it’s unsurprising to me that this is happening; on one level, you know, it will all get better in the future. But ironically, this issue about the text being a bit fuzzy is kind of a non-trivial thing at this stage. So it’s definitely something we want to capture, because if I were the Sloan people reading this, it’s something I’d really want to know. It’s just the kind of thing where somebody with a big enough sort of brown envelope might be able to lean on the appropriate place and say, no, you need to make a correction, or, you know, you different commercial entities need to talk to one another. So I think that’s a really useful thing for us to capture; we’ll have to muddle through. It did lead me to another thought before it passes, which is, thinking of the demo in Poland: I don’t think it’s cheating for us to take Mario or whoever and, before we show him the demo, show him something that’s a bit like a test card that enables him to say whether he can read the text. Because the one thing we can’t see is what the person wearing it sees, and it’s pretty critical that we know that before we show them the demo. In other words, we need some sort of a test card, almost, so that we can say to someone: okay, you put this on.
Mark Anderson: Can you see this? How crisp is it? And then we make any adjustments we are able to make, or, if we can’t, we at least get some idea of the quality they will see in what we’ve done. And it’s silly things like just testing whether, if they wear reading glasses for correction, they can wear those under the headset or not. I think that’s something we should give some consideration to. Hello there, I see a little person coming on screen. I think it’s something we should give consideration to, because it might also take a little pressure off the construction of the demo. We’ll be able to say, okay, fine: if you’re struggling to see this aspect of the overall presentation, if what you then see is slightly sub-par, that is for a reason we know about, not because it’s badly constructed, so you’re going to have to allow for that; and there are others in the room who will be able to see this in more fidelity. And then, having said that, two practical things. I can read the article nice and clearly on the Quest 3, so thank you very much. But I’m still struggling to use the menu bar; it sits there on my chest and I just cannot interact with it. Which is probably why I haven’t yet found my way to close the article to get back to the library.
Mark Anderson: Now, I’m not saying that as a criticism; what it’s saying to me, in demo terms, is that perhaps we almost need a help panel. Just, you know: how do I get out of here? Because one of the problems here is that the more you use this, the better you get, and the more intuitive your guesses become as to how to do things. Our problem is to show what we’ve done, and spent a lot of time doing, to people who don’t have that expertise and probably don’t even have the patience; they just expect it to work. And we know that’s probably a high hurdle at this point. So I don’t know if there’s some way we can think of some sort of help mechanism you can access inside the environment. Because having someone standing next to you... unless they have co-presence, and I don’t think we can do that at the moment, or I don’t know if we’re planning to do that. If I’m standing next to the person watching the demo, I might be able to see a pass-through of what they see, but it’s still quite hard to know exactly what they’re experiencing. But to have something they can access, you know, just literally an ejector-seat button or something. Just to get out.
Frode Hegland: Right. Yes. This is something we’ve discussed and agreed on: touching the sphere on your hand is supposed to be that. So, based on your experience, we may very well need to upgrade how useful that is.
Mark Anderson: I can get to the menu; I can open the tab bar menu. I just can’t interact with it at all. I don’t know, it may be something to do with the way I’m using it, just sitting down, and with the fact that maybe my hand isn’t far enough away within the virtual environment for it to be picked up. But please don’t take this as a critique; I don’t think it’s an error. It’s just practical experience.
Frode Hegland: It is critique, of course it is, and it’s very useful. I so far haven’t really used it very much, because all it does is scaling, and that isn’t useful in very many places. Considering we’re running out of time today, if we can all think a little bit about how that should be solved, maybe we don’t have it in front; maybe all the controls are on your arm, or other things. I think that would be a really useful contribution for next week, because if the help doesn’t help, it’s not very useful. So that is a big deal, Mark. Thanks for that.
Andrew Thompson: If I could respond really fast: so, Mark, the prism menu was an old design from when we were trying to do stuff physically, so it doesn’t use pointing; you’re supposed to touch the buttons. No one expects that now, because everything is pointing-based. Perhaps it’s time, I mean, it’s overdue, but we haven’t put it in the list yet, to redesign the menu at some point. And maybe the way to go is to make it all point-based and have it appear somewhere in front of you. It’s going to look different, but, you know, something like that; we could totally go that route. Even the touching is pretty buggy, because it’s just for development stuff.
Mark Anderson: The other thing I’ve been amazed to discover is that I can’t see a noticeable difference in tremor between my left and right hand, but boy, is it different inside the Quest. One of the big problems I’ve got is trying to do any text editing, because the built-in text editor in the Quest, the keyboard, has no arrow keys, and the one thing you can’t do easily is fix the one wrong character in the middle of a 40-character URL; you can’t arrow back to it.
Frode Hegland: Yep, this is something I wrote to Apple about; that should be in their keyboard as well. I so strongly agree with that.
Mark Anderson: Yeah, sorry. I’m very conscious that Fabian’s probably got something far more urgent to say.
Fabien Benetou: No, but I’ll say it quickly first. Maybe, actually, Brandel does have, like, a single word that Andrew can put in, and it solves everything; that’s what I put in the chat, like an option on the renderer, and boom, it all works. Maybe he doesn’t; maybe it’s not even possible. But for sure he would be the person to know. Now, I don’t want to insist too much, but in terms of, let’s say, readability of text: the demo I tried today, to me, answers the question, can you read text in webXR from a document? And it works; it’s proper. If we prototype to answer such a hypothesis, namely the question, can I read long text, a proper document, in XR today, then to me the answer is yes. Can anybody read it? Maybe not, but that’s a deeper question, or another question, let’s say. And also, once again, we’re at the worst moment: every headset is going to be only better than this. So I think we’re safely answering that hypothesis with a positive: a text document coming from research can be read today. It doesn’t have everything, like LaTeX and whatnot; we were discussing a bit before, and Mark was mentioning on Slack, that research documents are so much more than, quote unquote, just text. There are so many things to do. But on that hypothesis, in my opinion, the test of today shows a powerful answer. It’s something that a lot of people might not even have considered, and that one is solved. And I think formulating it properly, I don’t know how, that’s a lot of thumbs up, but through a video, through a document, something: in my opinion, it’s a worthwhile and powerful answer that’s already produced today. Yeah.
Frode Hegland: So, in closing, we have two minutes until our main people have to go. The text: reading in XR is a done thing, and reading in the native Vision is flawless. That’s done; it’s over. We are trying to experiment with interactions. Of course we should make it as legible as we can, no question about that, you know, the font sizes and everything. So let’s just not worry beyond that; we’ll work on it. But I don’t want Andrew to start going in and trying to overly optimize what Apple is making rubbish on purpose, right? Because they could make it brilliant. That’s just how it is. When it comes to the actual design, what I showed earlier today with the kind of bar at the bottom is something I would really like to discuss more in terms of Andrew’s work now, because we’re getting closer to that, and I think we’re getting closer to discussing whether something like that is relevant, because one of our key corpuses is a journal. It will be the journal. Right now we have a library and we have documents. So if we can all just think a little bit about how we want the journal, the hypertext journal, to be readable, I think that’ll be really, really worthwhile. I’m so grateful, Andrew, for what you’ve done today; you’ve gotten to a beautiful stage of moving things around. The fact that they’re a little fuzzy on the Vision is not your fault. I’m very much looking forward to Monday. If anyone has any further names, I ask you every week: tell me. Dene and I are going to start inviting people to sit, people to talk, people to write. And this is an absolutely brilliant time to come with specific design things. And, Mark, your job is to design your way out of the little triangle. Any final thoughts? We have 30 seconds. That sounds good. Okay, excellent. Have a good weekend and week. I’m going to go and celebrate with Edgar. I don’t know what he wants for dinner, but it’ll be whatever he wants, so we’ll see.
Dene Grigar: Happy birthday, Edgar.
Speaker5: Thank you.
Frode Hegland: Seven.
Speaker5: Seven.
Frode Hegland: Madness, madness. Love you all. Thank you for this. This is a very grateful for all of you. Bye for now.
Speaker5: All right.
Chat log:
16:01:04 From Frode Hegland : https://public.3.basecamp.com/p/fRBNiN656TjiKDGrN3e38zJP
16:03:08 From Frode Hegland : https://public.3.basecamp.com/p/fRBNiN656TjiKDGrN3e38zJP
16:10:41 From Dene Grigar : except me
16:12:50 From Andrew Thompson : I believe dark mode has surged in popularity because more and more people are using screens late at night, and the bright white is just too much in those situations. Perhaps not academics, but think developers/gamers/hobbyists
16:14:17 From Dene Grigar : It could be that I have been working with computer interfaces since 1982 that it is what I am used to, Andrew
16:17:04 From Mark Anderson : Replying to “I believe dark mode …”
Yes, I’m very prone to visual migraines, though for dark mode I need a brighter screen to aid contrast so – against expectation I stick with dim light mode. I want to like dark mode – it looks cool, but I can only stand it for short periods.
16:17:34 From Fabien Benetou : that’s a risk
16:20:08 From Fabien Benetou : ? https://www.openstreetmap.org/way/629712485
16:28:05 From Dene Grigar : XR and Spatial Hypertext
16:29:01 From Frode Hegland : Reading in XR ‘Blue Skies’. Main author Frode, contributions from anyone.
XR and Spatial Hypertext’ Blue sky’ By Dene, contributions from anyone.
Citation views By Mark and Adam.
16:29:15 From Mark Anderson : My working title for my Blue skies is “The Inner Hypertext of Digitally Native Documents”. almost not the final title
16:35:40 From Frode Hegland : How do you know where you are in the real world?
16:44:27 From Frode Hegland : Go ahead Leon
16:56:03 From Dene Grigar : brb
17:00:44 From Fabien Benetou : I didn’t even know that, nice introspection mechanism!
17:02:05 From Leon van Kammen : function foo(){
console.log("hello world")
}
console.log( foo.toString() ) // prints foo's source code
17:03:46 From Fabien Benetou : console.log.toString()
17:06:15 From Fabien Benetou : never final 😛
17:09:09 From Mark Anderson : Recursing: console.log(console.log.toString())
17:10:17 From Fabien Benetou : Replying to “Reading in XR ‘Blu…”
and me on manipulating grammars in XR as possibly workshop
17:16:09 From Frode Hegland : https://futuretextlab.info/2024/05/08/8-may-code/
17:17:27 From Leon van Kammen : Unfortunately I have to go in 5 mins
17:21:04 From Fabien Benetou : yes, very readable IMHO too
17:21:18 From Fabien Benetou : doc view
17:29:09 From Mark Anderson : I’m hoping we miss X version 5.5. (bad memories of Internet Explorer)
17:29:22 From Dene Grigar : Internet Exploder, as we lovingly called it
17:29:39 From Dene Grigar : 🙂
17:30:21 From Mark Anderson : Reacted to “Internet Exploder, a…” with 👍
17:34:10 From Frode Hegland : My pref is on interaction.
17:34:27 From Mark Anderson : Article text is v. readable for me on the Q3. Still struggling to use the in-view menu bar (via wrist). I can’t get it to interact with my hands.
17:36:02 From Fabien Benetou : might be a single threejs parameter for the renderer
17:43:13 From Fabien Benetou : yes, help in the experience itself
17:43:17 From Dene Grigar : I need to leave for campus precisely at 10 to make a meeting I have this morning
17:45:30 From Frode Hegland : Andrew, do you have to leave on the dot today?
17:45:46 From Andrew Thompson : Yes, at about 9:55
17:47:23 From Dene Grigar : andrew I am obviously going to be late to our meeting. Let Holly know
17:49:17 From Andrew Thompson : Reacted to “andrew I am obviousl…” with 👍