27 March 2024

Andrew Thompson: Progress.

Dene Grigar: Are you? An early morning riser?

Andrew Thompson: I used to be. Haven’t been for a while. Yeah.

Frode Hegland: Ask that. You may have ruined his programmer's credentials. Night owls. Yeah. Can you, can you? You can't see me, can you?

Andrew Thompson: No. You’re a great boss again.

Frode Hegland: You’re a.

Dene Grigar: Great box. I apologize. Big box.

Frode Hegland: I'm going to move things around there a bit. Andrew, have you posted a build to Slack? My Slack isn't loading, and for some reason, I can't even log in, so I don't have a. Yeah. So.

Andrew Thompson: No, actually I posted yesterday that I will not have a build today. Unfortunately, no. That's fine. I've been busy at work, but the demo is. Anything I currently have is very broken. Really not worth showing off, like a half-finished thing, so.

Frode Hegland: Yeah. No, that's absolutely fine. We still have things to discuss. And you're going to be in Dene's lab this week, so you'll also be on the Vision as well, right?

Dene Grigar: He was there yesterday. Are you coming in yesterday?

Andrew Thompson: Actually.

Dene Grigar: Than yesterday?

Andrew Thompson: Yeah. I won’t be in the rest of this week. Unless I’m really needed.

Dene Grigar: So what do you want him to do, Frode? Was it all day yesterday?

Frode Hegland: I would like him to try Reader and Author as a tester, because that may help him get a better clue for how we can do the integration that we're supposed to have. Because, you know, Dene and I had a little chat, as we usually do before the meeting, Andrew. And one of the things we actually have in the Sloan proposal is: user puts on headset, and the library is available, and this is to integrate with the Reader and Author stuff. So we're going to talk a little bit about that in our session now. And this is something where you have a huge opportunity to dictate how you would like it to work. We went through it at the beginning of the year, but now we're at the three-month mark. We need to, you know, take stock and see how to make things work.

Andrew Thompson: Yeah. In the sense of libraries, I thought we were planning on having them kind of, like, contained as JSON objects with a bunch of the URLs to the files that were inside said library. Yes. Is that not something that fits with the Sloan grant? I haven't read over the Sloan grant since we started, so I'm not sure.
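[Editor's note: Andrew's description of a library as a JSON object of file URLs could look roughly like the sketch below. Every field name here (name, documents, title, url, format) is an illustrative assumption, not an agreed schema from the project.]

```javascript
// A minimal sketch of a "library" as plain JSON, per Andrew's description:
// a named collection whose entries hold the URLs of the files inside it.
// All field names are illustrative assumptions, not an agreed schema.
const library = {
  name: "Demo library",
  documents: [
    { title: "Paper A", url: "https://example.org/papers/a.pdf", format: "pdf" },
    { title: "Paper B", url: "https://example.org/papers/b.html", format: "html" }
  ]
};

// The WebXR client could then fetch each entry's URL on demand.
function documentUrls(lib) {
  return lib.documents.map(d => d.url);
}
```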

Frode Hegland: It absolutely does. But the problem with you brilliant programmers, and this goes for the rest of you annoying creatures in the group: you know a lot more than I do. So I still have some holes; before it actually happens, to me it's still very vague. So we have to decide when to prioritize it. And also, what do you want my programmers to do, and how do you want them to present the data to you. But yes.

Andrew Thompson: Okay.

Frode Hegland: Yeah, definitely the wrapper for this.

Andrew Thompson: We will have to talk a bit about how you want integration between the two to work. Because they're dramatically different pieces of software. Like, you actually have a piece of software written in code that runs as an app. And then this is all constrained by web, which is a lot more limiting. So if you just want to export, like, a collection of links, that shouldn't be a problem. I don't know. We'll have to figure that out. I guess that's today's discussion.

Frode Hegland: That is definitely one of today’s discussions.

Dene Grigar: I also think there's value in looking at Author and Reader, because — just to recap quickly before we get started — we originally were writing this so that Author and Reader were the technologies we were using. That was the original draft, and then we pulled that out and made it PDFs, because the Sloan Foundation wanted non-proprietary, everything open source. Right. So we pulled that out. But the advantage of going back and looking at Author and Reader is that they behave in a way that we want the PDFs to behave, you know; they do things that PDFs don't do. And so I guess the question I ask is: can we make this environment emulate or simulate some of that stuff that Reader and Author can do? And then that begs the question, what can Reader and Author do? So we start by looking at what they're doing inside the space, now that they're inside the space. It took a while for Frode to get them programmed for it. And then we turn around and try to apply those concepts to the PDF. Does that make sense, Andrew?

Andrew Thompson: Yeah, that makes sense. But what’s confusing me a bit is you have reader and author in XR. So we don’t want to recreate that in a worse environment. That doesn’t make any sense.

Frode Hegland: No, no, no. The magic — is this the idea? The dream is this. Hang on, I'm just going to give you some links there on the sidebar. Okay. So native applications have high resolution, so I expect that a user will use something like Author to write when it's just writing, and expect them to use something like Reader when they just read. But when they want to do something more involved, they click a button and the same stuff opens through a web browser, in this environment. That's the kind of thing I see. Okay.

Andrew Thompson: Yeah. So it sends the data to the link. That opens it here.

Frode Hegland: Yes. Yes, exactly. Because I'm currently charging — I mean, I'm charging for it myself because I have no choice. But I do expect to make Author and Reader free. But that's not enough. This stuff needs to be open source. So I'm just presenting another external thing that hopefully other companies will emulate, the way we do it, APIs or whatever.
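[Editor's note: one naive way the hand-off described here could work — the native app packs the data into a link, and the web (XR) side unpacks it — is sketched below. The base URL and the `library` query-parameter name are assumptions for illustration, not an agreed protocol.]

```javascript
// Sketch of a native-app-to-web hand-off via a link, as discussed above.
// The "library" query-parameter name is an illustrative assumption.

// Native side: serialize the library object into a link the browser opens.
function makeHandoffLink(baseUrl, libraryObject) {
  const payload = encodeURIComponent(JSON.stringify(libraryObject));
  return `${baseUrl}?library=${payload}`;
}

// Web (XR) side: read the library object back out of the opened URL.
// URLSearchParams.get() already percent-decodes the value.
function readHandoffLink(url) {
  const raw = new URL(url).searchParams.get("library");
  return JSON.parse(raw);
}
```

A round trip — `readHandoffLink(makeHandoffLink(base, lib))` — returns an object equal to the original, so the browser side needs no separate copy of the data.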

Dene Grigar: So welcome everybody to our meeting. I just dropped the link to the agenda in our little chat there. The Zoom chat. Yeah. Mark sends his regards. He can't be here today. So I think this might be — I'm not sure if Rob's coming. Have you heard anything from Rob, Frode?

Frode Hegland: No, I haven't, so I think we have quorum or whatever they call it. Yeah. There are two links in the sidebar. One is exactly that, which is useful. Please open up the Basecamp agenda. And then there is a link to a video recorded yesterday and only just now updated. Hang on. I'm just going to put my phone on privacy mode, because people are texting, which is not what we need right now. Focus. Yes, focus. Excellent. Right. So, to Peter. This is a funny thing. I haven't talked to Alan in forever, but I had a brief chat with Adam yesterday, and he mentioned that he talked to Alan a little bit, and he is also in — yeah. Reading what you're saying, Peter. Anyway. Peter, you still haven't had an opportunity to go to an Apple Store to try the headset, right? Because it's the same with Alan. Apparently, he lives five minutes away from an Apple Store, and he hasn't tried it. So I'm wondering what it is about you New Yorkers refusing to get your head in the game, quite literally. Maybe it's something about New York. I just thought it was a bit funny since it was just the two of you.

Peter Wasilko: Well, the Apple Store in New York isn’t sited very conveniently for people coming in.

Frode Hegland: That’s quite a that’s more than one though, right?

Peter Wasilko: Yeah. There's one in Westchester, which is the big mall with some of the world's worst parking.

Frode Hegland: And then there is the one in town.

Peter Wasilko: And going into the city. Well, you’d need to be wearing. Well, no, see, you’re not allowed to wear bulletproof clothing in New York anymore. They made that illegal. Meanwhile, everyone’s being stabbed and shot at right and left. So I wouldn’t go into the city unless I just taken out an extremely large life insurance policy on myself and had someone who I wanted to be my beneficiary. It’s insane down there now.

Frode Hegland: Okay.

Dene Grigar: Worse than that is being pushed onto the tracks of the subway.

Peter Wasilko: Oh, yeah. That’s bad. I did have a friend of a friend who was actually wearing chain mail on her way back from an SCA event, and someone tried to stab her and couldn’t figure out why the knife wouldn’t go in. So yeah, that’s New York.

Frode Hegland: Yeah. Oh, well. Okay. Peter makes it clear. Come to London. Try it here. Right. So I loved.

Peter Wasilko: London when I was there in law school. Yes. That’s a civilized city.

Frode Hegland: Yes, except a lot of people say it isn't anymore. I disagree. Anyway, so, yeah, let's look at the agenda here. So I hope, Andrew and Dene, on that device you can really try Author and Reader soon. I really, really would appreciate your feedback. I cannot test it because I don't have an American account, meaning that I accidentally deleted the TestFlight and I can't reinstall it, which is absolutely crazy. Same with you, Fabien. I look forward to testing comments from you. That's really all I have to say for that now, except to say please click on the second link that I posted. You can just leave the sound off; it's just to show you an environment that I worked in yesterday. It's relevant for our next discussion bit.

Speaker5: Is it good to do? Stuff now? Or even better.

Dene Grigar: Okay. Shall we get started then? Oh, yeah.

Frode Hegland: I’m just asking if you can click on the link to look at the minute long video that okay.

Dene Grigar: So that is that in our is that in our agenda.

Frode Hegland: No no no I just posted the link here in just in our chat.

Dene Grigar: Okay. So we’re not going to follow the agenda okay.

Frode Hegland: It is the agenda. It is: Author and Reader, Vision, testing and progress. It's related to that.

Speaker6: Japanese society in the 1600s had a very specific work. Here played an important part in In Stature.

Frode Hegland: Look at the beautiful flowers.

That’s the secret.

Speaker7: Your mission is to attend Jasper and at his north birthday party on 18th May. So. One, two, three.

Frode Hegland: I expect you have seen it. Just please tell me when you’ve seen it so I don’t. We don’t just hang out.

Peter Wasilko: Watching it now.

Frode Hegland: There is no interesting audio, just in case you’re wondering.

Dene Grigar: I’m wondering.

Peter Wasilko: Hairstyles in ancient Japan looked intriguing.

Dene Grigar: That's Karate Kid, right? Oh, Shogun. Shogun. Okay. I'm wondering about the documents down below, where you can add material. I wonder if it can be not horizontal, but vertical, running beside the actual document. So you can do this.

Frode Hegland: Yeah, it can absolutely do that. That's trivial. You can do that yourself as a user. You can scale them and put them anywhere you want. And the act of doing that, I think, is very educational for how Apple has done it. Sometimes it's really great. Sometimes it's not so great. And yeah. Okay. So everyone's seen it, right? Enough. Okay. Because it's mostly the beginning, right? Next: Dene, book launch.

Dene Grigar: Hey, I want to thank everybody who came. It was a good turnout. We had 47 people, and then I had about five that couldn't make it because of doctor's appointments and illnesses and stuff like that. But it was a great turnout. Folks from, you know, almost every continent. Michael Joyce was there. So a lot of hypertext pioneers were in the room, and they hadn't seen each other in years, which I thought was also exciting. John McDaid was there. John McDaid and Michael Joyce were part of a group called TINAC back in the late 80s, early 90s. And yeah, it's recorded, and know that I'll be putting this online once Andrew edits the video for me. He'll get it today, so it'll probably be up next week. Anyway, yeah, it was just really wonderful. And it was heartening to see everybody together. Some of these people are very old; some have cancer. So we managed to pull people together at a really good time. So it's a historical document as much as it's a celebration. The book is out. It's free until the 21st of April as a PDF. So if you want to look at what we're talking about in terms of textual issues for born-digital text, it might be interesting for you. But if not, thanks for listening. Working on the next book now.

Frode Hegland: But next, are you writing a book about the next or just the next book?

Dene Grigar: There’s my book.

Frode Hegland: This call is sponsored by.

Dene Grigar: It's with Cambridge Press. I think that's cool. But yeah. So the next book is going to be like a history of born-digital literature. And I'm sitting on all the archives in the lab, so it's going to be fun to write. It's going to take years to do that one, though. Yeah. That's very good. Yes, absolutely, Peter. Nick Montfort, Twisty Little Passages, fantastic book. I'll type the title in here for you. Wonderful book. Came out a while back, but I still see it as the Bible of the history of IF. It's published by MIT Press. You got it right here. Montfort.

Rob Swigart: Intensity.

Dene Grigar: Twisty Little Passages: An Approach to Interactive Fiction. And I think it came out 2003. And Nick Montfort is the kind of king of IF. Literary IF.

Rob Swigart: It.

Frode Hegland: IF meaning interactive fiction.

Dene Grigar: Yeah, yeah.

Frode Hegland: Yeah.

Dene Grigar: Yeah. MIT is great. That's probably where I want to put my next book. All right. So next on the agenda. Back to our agenda.

Frode Hegland: Yeah. On Monday we'll be talking a little bit more about inviting people to the book and symposium. Please think about who to invite. Right. Just think crazy. If it's someone you know, great. If it's someone you don't, please just start sending it in, because we will start inviting people soon. So it's that simple.

Dene Grigar: I want to mention — we didn't put this on the agenda announcements, Frode — but you and I are participating on Sunday, noon my time, in a Discord meeting with VR/AR people. And I'm seeing that event as a possible way to recruit folks for this group as well as for our symposium. So, yeah. And, you know, I could talk about what we're talking about.

Frode Hegland: I'm going to just put the — I think everybody knows them — once I can. I'm just adding this to the agenda. Please reload, everyone. Jimmy. Six degrees of freedom. Such a cool Twitter handle. So he wanted to know what we're doing, what we're up to. So we're going to present that. Thank you, Dene. That was a bit of an oversight for me to forget. Dene, you're muted.

Dene Grigar: Yeah we’re just talking about who to invite. And I was like that’s part of why we’re doing this thing on Sundays to get some more people.

Frode Hegland: Exactly.

Dene Grigar: New people.

Frode Hegland: Any other announcements from anyone?

Dene Grigar: I want to thank Peter for all the information you've been posting in Slack. That's been very interesting. So that's helpful. So just keep that coming.

Peter Wasilko: Will do.

Frode Hegland: Yeah. Absolutely. Absolutely. Right. So going on to the first section. So the reason I showed the video — and I should probably have shown it now, but anyway — is just a little bit of insight, and I really want to hear from the rest of you who've been using the Vision what you feel. First of all, using it in public: it really depends on where it is. It can very quickly become uncomfortable, because I can't really see what's happening to my sides. And if I have my work things in front of me, they tend to obscure other people. So if I'm sitting in a place where people can walk, it's not always that cool. Which was really, really interesting, because I thought the AR bit would be better. On the other side of that, nobody is shocked by the device. The other day this lady came up to me — she works for a company — and she said, oh, that's the Apple Vision, isn't it? Can you do this? Can you do that? And so she wants to, you know, have a deeper dialog with us. So no one seems insulted, or like, what weirdness is going on there. But at the same time, you know, the Vision — I hope they update the eyes to be a bit clearer, because it isn't always clear that I can see. It's an odd thing when it comes to layout of the workspaces.

Frode Hegland: That's also very interesting. When I'm in an actual environment, I need to have space in front of me, despite the fact that I can overlay that. I may have mentioned that particular thing before. And also, as you saw in the video, when I put things in my library or dining room or whatever it is, if I leave the room and look back at the room, I can see the back of these screens. So it becomes an X-ray view of the house. That was really, really surprising. And I think that's really relevant, because it relates to Peter's mentioning of memory palaces. But the Vision doesn't have good window management. So if I reset or restart anything on the Vision, all of these disappear. I have to set it up again. So the Vision native stuff has a lot of benefits, but it also has quite a few drawbacks in how it can be interacted with now. And what is of course really annoying is that WebXR, at least on the Vision, doesn't allow for pass-through video. So that's why when we put on the headset, we really are going into a thinking cap, and you really have to sit on a swivel chair or equivalent, unless you have the luxury of having an entire room for WebXR.

Dene Grigar: I don’t have the same. I don’t have the same problem with the vision and the location as you’re having. I wonder why that would be.

Frode Hegland: What do you mean?

Dene Grigar: You said that you’re having trouble. Like with the vision part. Like things being clear. I’m not having that trouble at all.

Frode Hegland: No, no, no, I don’t have a problem with the clarity. The clarity is perfect. What was I what else was I saying at that point?

Dene Grigar: You said you're disappointed with the — I thought you said seeing through, like the clarity.

Frode Hegland: What I meant was I can put screens around the house, move around, and then I can literally see through the walls to see the screens, which is really cool. And that really plays to the memory palace: you put different information in different parts of your environment. However, if you restart the Vision or restart the app or reset the view, they all disappear. That's what I mean. But these are early days, and this is window management. I'm sure visionOS will be updated to deal with these things, but there currently are issues. Other issues: I use a mouse and trackpad I have dedicated for the Vision when I'm writing, and the trackpad is surprisingly useful. It's tangible and it's nice. But the problem is very often my eyes: I just look at something and it thinks my eye is now the cursor rather than the trackpad. So there are all these teething things that are just worth bringing up; hopefully they will be fixed. Any other comments on experiences with the Vision Pro?

Dene Grigar: Well, I've been going back and forth between the Vision Pro and the Quest 2. Right. Because I have the Quest 2 that the program owns here at the house. And people have talked about how heavy the Vision Pro is, but I find the Quest 2 heavier. The Vision Pro is much easier on my head. I like the white strap. The one complaint I have is that we have to use that battery, right? It's not battery-less, so I'm hoping that the next iteration will, you know, facilitate a change in that. But I find it much easier than the Quest 2 and Quest 3. Of course, it has a better see-through experience, but I just find the Vision Pro better. The only disadvantage — this is what you and I were talking about before the meeting — is, like, what is the point of this Vision Pro that's different from the Quest? What does it offer us?

Frode Hegland: And what did you say? You felt the quest has a better see through?

Dene Grigar: No, I said it doesn’t. The Vision Pro does.

Frode Hegland: I just — because there are so many aspects to that. Okay, that makes sense. Oh, sorry, Dene.

Dene Grigar: But we were talking this morning before the meeting, because we're meeting at 7:30 before we all get together at eight. And it seems to me we've got two things happening. We've got the Apple Vision Pro experience over here, and the Quest, the Vive, the Rift, all that stuff over here. Right. So the other headsets. And there seems to be, at this moment, a demarcation between the two environments, right, with these being entertainment — you know, magic, sports and all these things that you can do that's fun — and then some basic business stuff. But over here, we've got this thing that's just waiting for a purpose. It's just kind of waiting for a real purpose. And I don't think it's found it yet. And that's because, you know, people are still trying to figure out what to do with it. And that's where I see the opportunity for us to define it. So, yeah. Just to finish the thought, though, before I turn this over: I'm imagining — and I've mentioned this to Frode this morning — I'm teaching that class in the fall on this topic, on spatial computing. And one of the things I think will be useful for them is to make an environment. So if you've been playing in the Apple Vision Pro, one thing you can do that I think is very exciting is you can go into a landscape. You can go to, like, Joshua Tree National Park or to the island of Hawaii.

Dene Grigar: Right? It just engulfs you. It's immersive. It's beautiful. Right? Gorgeous. Those aren't hard to make. Our students can make those. So it's not like making a game where we have to use Unity. They can actually produce that. And I'm imagining those kinds of activities where it's more of an experience. More intellectual, more — I won't use the word spiritual, but more, you know, brain than it is just, oh, look what I can do. And, you know, I do like the fun stuff on the Quest. But I think there is a difference, and we can tease out more and more with our project what that is, and with our side quests especially. And I think when we do pick our side quests, which I imagine is coming down the pipeline, we should make a list of them and start to think about what we're saying the Vision Pro and things like it in the future can do for us. I'll stop.

Frode Hegland: Yeah. No, thank you for that. Fabien, do you have any comments on using the Vision so far? But also, particularly, for all of you: the text, the reading ability in the Vision is amazing, right?

Fabien Bentou: Yes. To me, the reading is the main difference. Like, I go back and forth between the different headsets, and yes, there are differences in terms of weight, balance, etc. But I think for people who would spend an hour or two a day, I would argue those are small differences. But in terms of readability of actual text — which is, I would imagine, the main limiting factor, because most academic research is text; it's not videos. Video can be a very important part of a publication, but in practice, the typical document is 98% text at least. So even if it's, like, a mathematical equation for a model, in the end it's still a couple of characters. And if you get a headache after ten minutes, it doesn't work. So in my opinion, that's the biggest differentiator. In terms of usage, I think it's the same kind of social challenges as before. Namely, yes, people are comfortable with it, or at least they don't freak out at it, even in a coffee shop or in a library or whatnot. But are we, the users, comfortable with it? We should be, but in practice, I think there is the kind of sense of: am I being seen? Am I seeing other people? That is still unclear and thus a little bit awkward. And I think for a lot of people, that's basically solved by, like, if you're in your own home office, or an office where you can close the door, that big problem goes away. I would argue it's not really a technical problem.

Fabien Bentou: Maybe it can also be solved by having, like, a little red flag on your desk that says, hey, I'm busy, I'm in VR, I'm working. And I don't think there is necessarily a need for — what's the name? I forgot the name that they use. But yeah, to me it's the same as in the desk area, where you can have a little flag on the desk to say, hey, I want more, or I'm done. It's not a hard solution to find, but I think that's still maybe a luxury that most people don't have: to say, okay, don't interrupt me for one hour because I'm reading those documents, editing them, sorting them, doing my kind of information work. But in terms of usage, yeah, to me it's being able to conveniently enjoy actual text. That's a new thing. And I still need to — a little bit like we saw in your one-minute-30 video — go through organizing those things in place: documents, photos, whatever. And with the permanence aspect to it: you leave the room and they're still there. That wasn't feasible until now. It was feasible as a kind of low-resolution image, but not actual documents that you can interact with. Yeah, with still some limitations. As far as I know, I don't think you can have, like, a set of five rooms — a room like this in the library and a room like this at home — with different layouts. So still a lot to explore, but yeah, that's my naive take.

Dene Grigar: I'll say, I think space and safety are going to be a problem with XR environments for quite a while, right?

Frode Hegland: Well, I'm saying — yeah, yeah. Sorry, sorry, Dene, I spoke over you. Hi, Rob. By the way, The Three-Body Problem — we've seen the beginning of it, and it is really good. But I think we need to develop a bit of a language here, because — AR and VR, of course we all know what the difference is. But I think the kind of VR that we have in movies, where you can walk around and do whatever, and if someone punches you, you feel it — that's something quite different yet again. Yeah, maybe simulation is a better word, because, you know, the reality we're dealing with now is you're still in your physical body. And that's becoming more interesting as we move along.

Dene Grigar: So, the early cyberpunk — I mean, Rob is really, really into this; he can add to it. But simstim was an idea that you would — well, The Matrix is all about jacking in, you know, taking yourself to some simulated world. Right. But the simstim comes from Neuromancer. We see it also in Synners by Pat Cadigan, another one of my favorite writers. But they were not called VR. They were mostly seen as simulated environments.

Rob Swigart: Yeah.

Frode Hegland: Yeah. And hyperreality — that's also a bit Baudrillard, of course. So, Peter, you need to get a headset on your head. Right. Rob, what is your experience with the Vision so far? What do you think are the highs and lows? And have you had a chance to try Author or Reader yet?

Rob Swigart: My experience is that it's not ready. I find it very difficult to work with documents. It's difficult to type. Do you have a.

Frode Hegland: Do you use a keyboard or not?

Rob Swigart: Yeah, I've got a keyboard that's just for the Vision. But it's still clumsy. It's probably because I don't quite grok how to do it, but there's not much guidance on how to do things. I did download Reader into the headset, so I'll get some experience with that. I can try Author.

Frode Hegland: Yeah. You've been given an invitation link for TestFlight to try it. The thing that surprises me about the environment — the where-you-are issue — is that I do very much agree with Dene that when you're thinking, it's really good to be in a completely plain space, 100%. But currently, when you're in a 100% artificial environment, you don't see your keyboard or your trackpad. So that is a major limitation. So that's why I tend to work at least in half real space. And when I do that using the Bluetooth keyboard — the normal Apple one — I find the lag is incredibly small. It feels natural. But it did take me a couple of weeks of this and that and getting used to it before I could actually properly work there. Another thing I found is that

Dene Grigar: You're not going to like what I just wrote, Frode.

Frode Hegland: Oh, no, I was just going to agree with that. No, no, I fully agree with that, because with my recent trying to move it about, you know, in different environments — the reason I've been working in public is a little bit to see if I'll talk to someone and get their perspective, but it's also: is it basically a massive laptop screen that's easy to carry? And what I'm finding more and more is no, you want to use this in a private space like your office, and there you can speak. Siri on the Vision is phenomenal, right? So to use Siri for annotations — yeah, yeah, yeah, 100%. Absolutely 100%. You should be able to type as well. But yeah, absolutely. No question, Dene. Fully agree with you on that.

Dene Grigar: That means — I mean, this now goes back to text, right? I mean, now we're back to talking about text. So composing text orally, Rob, as opposed to writing it. Now we're shifting back, and once again we're talking about the second orality, which we discussed way back at the beginning of all this. And so being able to compose on the fly with our voice, in our minds, composing intellectually — that's going to be a different shift. That's going to be a shift in the way we function. And so, yeah, when we think, we're going to be thinking different, to quote Apple.

Rob Swigart: That's a major shift. Cognitively, I'm not sure I still have the neurons to do that. Okay, you know.

Frode Hegland: What this might be. And so Andrew doesn't have new code today. We're still going to look at what he had last week and have another discussion. And we're going to be doing a three-month review. But Dene, is it okay if we do a little bit of design talk? Because I think it's very related to the topic, because then I can show — okay, I'm going to show a really brief slideshow, then the ability to get these things moving. Please scream if you can't see my screen. Right. So just a few slides. So here is — okay. One thing that is weird about being native is you have these rounded corners. It almost feels like everything is a floating iPad app. So in Reader, this is what you would see.

Dene Grigar: Now, I actually like the rounded corners.

Frode Hegland: It kind of depends on what you're doing. It depends on what's behind it, to me. Right. So if you tap on that, you get a bar on top in Reader as it is now. So I'm suggesting adding a few things. Currently, what you can do — and thank you, Fabien, for making me do this — you can have it all spread out as a huge thing with all the pages. You can also have all the pages horizontal, so you know you can walk around, or you can move your hands to see them. But this is what Dene talked about last week. She said she wants to make notes when she's reading. So this is what I thought about: you have a little button thing, add notes, and it opens a thing underneath where you can write your notes. About the document, not about a page — the whole document. Whether it's connected to the document or not, I don't know; it depends on implementation. But at least this text entry area refers to this one. The reason I wanted you guys to see the video early on is because the writing area is nicer when it's at an angle, compared to reading, sometimes. So ideally these things should not be one thing, but this is an example of where voice notes should be absolutely fine, at least for parts of it, which the Vision already supports, right? Please note that all I'm talking about here is the native app, but the idea should overlap with what we're doing in XR and the information shared. So first of all, what do you feel about the idea of having a text entry thing, optional, underneath? Makes sense, right?

Dene Grigar: Yeah. And, as I asked and you said was possible: not to have it underneath like a keyboard, but to have it on the side, like marginalia. You could actually have the option of putting it next to, for example, the document as a vertical, so that I can make notes and even link, say, a line from the abstract to my notes.

Frode Hegland: Yeah, there are some issues with that.

Peter Wasilko: I want a command line interface. Please give me a command line interface.

Dene Grigar: No.

Frode Hegland: Not gonna happen.

Dene Grigar: Oh, so old school, Peter. So old school.

Frode Hegland: So, something we’ve gone through before; this is just jumping back. When you have a vertical document, if you have a two-page spread, it slightly makes sense. If it’s a one-page spread, you know, often the margin is different on the right and left pages, so it gets very wonky, like this. This is only when dealing with an original PDF; if we’re doing the HTML or the plain text, of course it isn’t there. But yeah, I mean, these are absolutely things to be discussed. So anyway, here’s the notion that you have some sort of way to add text, wherever it might be. And then you close that, of course, and then we tap again to get the thing on top. And now for the second half of this: there’s the notion of a different kind of library view. So if you click on that, forget the actual text; it’s just a screenshot from Author. So just pretend each one is the name of a document. Right. The document we were in is highlighted. This works exactly like the concept map view in Author, meaning that these documents would only be connected based on the comments you have written about them.

Dene Grigar: Where’s it highlighted from? I’m not seeing it.

Frode Hegland: The middle one has a bit of a line around it.

Dene Grigar: Okay. The circle, the cylinder?

Frode Hegland: Yeah. Kind of. You know, it’s probably not the best style. But the logic is, as you know: here, that’s a pretend document, and these are the comments you’ve written. Right. If there are any other books, or documents rather, that have the same thing in your comments, like, say, "electronic literature", and you just write the words "electronic literature" and then select one of these things, you will get a line to it. It’s just one view, but as you can see on top, there can also be a timeline of all your documents, or you can list them as a plain list by titles, because depending on use you want to have a different view, like in Author, which is essentially the same thing. So that was the key thing I wanted to show today. And then it went back to, you know, what do we really need? And what we really need is better ways to read. And that’s really the discussion we had quite a while ago, where at least the abstract should be super clean and easy to read, and not hidden in a small ACM kind of document. So we really are at a stage where HTML or plain text presented better is more important than having them all as floating things, which is what Andrew is working on now. How we choose to interact with them, and how we choose to kind of hide them when we don’t need them, will be interesting.

Dene Grigar: Hang on. What are your thoughts on that, Andrew?

Andrew Thompson: Well, you’re talking specifically about Reader and Author right now, so I wouldn’t have control there specifically. It makes sense, like, for the margin-based stuff, if we want to carry something like that over. I know we talked about sticky notes early on, from Rob’s suggestion, which I haven’t forgotten about; still on my list of things to someday get to. So I think that would be a similar integration for ours.

Ron Swigart: And.

Frode Hegland: So the notion here is: where can we most cheaply and effectively experiment with different things? I am doing some experiments in native, but the idea is really that, as quickly as possible, you should be able to do these views in the WebXR, and a lot of the components, I feel, you already have. So for some of the views at the end there, ideally you would be able to click a button in Reader or associated software and open it up in our WebXR view.

Andrew Thompson: So you’re saying you want to have the entire interface able to be exported into our WebXR one?

Frode Hegland: No, no, no. What I’m saying is that... actually, Fabien, have you had a chance to look at what Andrew did last week?

Fabien Bentou: Which one was it?

Frode Hegland: I’m going to give you a link, because, Andrew, this will help answer your question when we look at that together. I’m just going to pop in a link here. And please, for those of you who have the opportunity, please have a look at it in a headset. Actually, I’ll give you the direct link. So this is the latest one, and then I will address the question. Sorry, I have to step away for a few seconds while that happens. I thought you guys were all going to enter the headset to look at the code. Fabien, do you want to have a look?

Fabien Bentou: Can I have a look? Wait, no, I can’t have a look right now. But I saw the video already. Yeah. It’s selecting, or extracting, references from the bibliography, roughly.

Frode Hegland: Yeah. What we have so far is a view in a cylinder around us. And the main interaction item is one reference, which you can move around and scale, and you can choose to extract specific items or not. So the question is, what should you be able to do in there? And that was what I was going on a little bit about with the system here. For instance, that typing of notes about the PDF: you should be able to do it in Reader or in the environment, and it should store it in the same way. So depending on how flat or how dimensional you want it, you should have the options.

Andrew Thompson: It’s the storing it in the same way bit that gets more complicated, because, once again, you have an app that can do a lot more than WebXR can. So if you want, like, the data to be the same, like the text content, that’s probably possible. But it of course won’t have the same functionality overall.

Frode Hegland: But I’m not looking for the same functionality. So one of the items on our agenda today is to do a little bit of a review of where we are. So I’ve been looking, well, Dene and I, in our meeting, were looking at the original Sloan pitch. And in there, it does state that the user should have access to their library, and that this software should work with the external software Author and Reader. So what I mean is essentially: use Visual-Meta to communicate. And that can absolutely, as you said in the beginning of this call, Andrew, be Visual-Meta in the form of JSON. So I’m not talking about exactly the same functionality, but I’m saying that the last views I showed, like the big window thing, probably suit your world, so to speak, better. So we need to look at how to go in and out of that better.

Peter Wasilko: Yeah, I think it would really help if we had a way to textually represent what’s happening gesturally inside of the interface. Then we’d be able to mock user interactions with replayable text, and ideally record what someone’s doing physically in the environment so that, again, it could be replayed later on, saved off and stored. I’m not quite sure how we do it; I don’t know whether anyone’s done a language for it yet. Something maybe along the lines of a picture language, describing things in abstract terms, you know, "two thirds to the left"; the little superscript from Blade Runner offers a couple of ideas there. But we really just need some way, and I’m not sure what the best one is, maybe Andrew would have some ideas, of textually representing what’s going on, so that people without a headset could type up a script as to what would be happening if they had the hardware on, and then play that through a simulator to see what’s going on if they made those interactions. That’s the idea I have in mind here; I’m not sure how well I’m articulating it.

Frode Hegland: Yeah, we have talked about sharing things. By the way, you can see my screen, right?

Ron Swigart: Oh.

Frode Hegland: But obviously it’s better to say it properly: we have these interactions where we can move this around, which is kind of amazing, actually. The rendering quality is not bad, but of course it isn’t native. So we have, you know, you can detach this one and move it around. No, I kind of lost it. And then we have options for things like Find, which in a document is quite powerful; it shows this from the document. So that’s what I mean: we have these things. So it’s a matter of being able to go from one environment into this environment. So here you can see the PDF in Reader. It currently doesn’t have anything useful on top here, but I could imagine a button in there that, in addition to what we just looked at, opens up in WebXR and at least takes the metadata of this document into our WebXR environment. And, Dene, to answer your other question: this is how I had it, with the writing below, but I can always change that.

Dene Grigar: Yeah.

Speaker5: Yeah. Sorry.

Dene Grigar: Marginalia is a really important part of writing on texts, right? It has been. We may move away from that one day, but people still write in their texts.

Frode Hegland: Yeah. I’m not sure if we’ll be writing on the document itself, though, because I think, computationally so to speak, it’s better to have it off. Because once you’re starting with layers, my feeling is that it becomes quite complicated. And the idea of selecting text with the eyes is just really odd. Let’s try to do this.

Dene Grigar: May I say something else? Yeah, of course. The other thing we’re talking about in the case studies, you know, the first part of the case studies, is: what do I do now? Right? That’s why I want to write the case studies in two pieces. How do I read text and understand text today? But I think a better question, once that’s completed, is: what is a new way of doing this? What other ways can we do this in this new environment? We don’t have to have a breadcrumb. We don’t have to do the same activities in these two places. They can be something new, right? But we can’t really tease out the new till we establish the current practices. So I’m interested in being able to do the things I do right now, but not really. I mean, I’ve always been very interested in doing something totally different. That’s what’s driving me to this project. I mean, I don’t want to use XR to do the things I’m already doing. I want to do things differently.

Frode Hegland: No, you don’t want to do it different.

Dene Grigar: Different.

Frode Hegland: Do different.

Dene Grigar: Do different, yes. But we have to go through a process. It’s a methodical process of working through the problem. So, step one: how do I use text now? You know, what’s possible in this environment, like that. Okay, now let’s leave that behind. What’s possible? A new way of looking at things.

Frode Hegland: So, yeah, I mean, that’s why, of course, the case studies are important, because the case studies will of course partly be a little bit about how things are done now, but primarily they should be about what actual results are needed. And we have decided that we expect the data and metadata in the system we’re building to be perfect, because it is possible to do that. Perfect doesn’t exist, but we can make that assumption. So then the question is what interactions we can have. And, you know, reading back over our pitch, it is quite clear to me that we also need to look at reading, good old-fashioned reading. And one thing obviously also to think about is AI, and AI will not be core to this work, so doing large analysis of large documents and stuff is a bit of a... sorry, Fabien, please go ahead.

Speaker5: I

Fabien Bentou: In terms of what’s feasible, and I think it’s a rather low-hanging fruit from the demo number five that I just tried: exporting the extracted citations, not citations, but references, to Mendeley or whatever.

Frode Hegland: I have to go downstairs. I think the plumber might have arrived. I’m really sorry.

Speaker5: Yeah, I.

Fabien Bentou: I don’t think the plumber knows Mendeley citation references, but that would be good. But otherwise, the thing is, once you’ve, honestly, once you’ve bothered, quote unquote, moving those references out, then what? So you know that you can move them, and that’s good. But I think if you can export to a known format, just using Mendeley as an example, whatever open bibliographical reference format one uses, and you press a big huge button that says Save, and you get a URL or whatever out that you can use on your good old desktop after removing the headset, I think it’s something. It’s not super exciting, but I mean, it’s usable, I would say, in an academic workflow. Like, okay, you have a new set of references; maybe they are even ordered a certain way, and maybe it gave you an idea to write a new paper based on those references, because it’s a new idea. So I would say it’s a low-hanging fruit to get done. And it’s, again, part of a workflow that people who would use such a tool should be, I believe, interested in. So that’s what I would do, without much more thought than this. Like, maybe it is pointless, but it’s hooked on the traditional way. And then I would personally get more motivated learning other gestures or importing my own data, because I know that the work I’ll have done there will work outside of the headset.

Frode Hegland: I heard halfway that I had to run. Can you please repeat the intro?

Fabien Bentou: So basically, export to Mendeley or whatever bibliographical tool you use. Because right now you pay the cost, quote unquote, of learning the gestures. It is not perfect; maybe it’s just the way that you learn it that is not perfect, or, honestly, the implementation. But the point is, it’s challenging, which is fine, but at least you need to get the reward. And the reward is those citations, or those references rather, ordered in the way that you moved them in space, and that you can share with a colleague, or with yourself, back in the more traditional way of the desktop. That would be what I would do as a low-hanging fruit from demo number five.

Frode Hegland: Yeah. We expect to have full references in HTML or something for our working document series from the ACM Hypertext conference. Mark has gone through and cleaned that up so that we have them, and they are also available through Visual-Meta, of course, at the back of the document. Does that...? I mean, because Mendeley and so on have their own databases, but they are quite proprietary and kind of secret sauce.

Peter Wasilko: So everyone uses BibTeX as a standard interchange format, so there’s no trouble in moving them between different reference managers. Also, if we have numbered citations, it might actually be easier if we were keying in the citation numbers textually, rather than having to angle our hand to send a raycast beam out to intersect with the citation reference on the page. I’d just glance; my eyes would be on the number pad and I’d be hitting 23, as opposed to trying to line up with reference 23 in the big long list of references and then twitch my fingers at the right moment. But I haven’t tried it on yet, so that’s purely speculative.

Frode Hegland: Selecting is really not a problem; it’s getting better and better with every iteration. Yeah. Peter, you really need to find a headset to try somehow. I’m sorry that New York has become so horrible that the Apple Store is not really accessible, but it is a different thing. And even for those who have a Quest, the world is completely different.

Speaker5: Right.

Frode Hegland: So, assuming that we have perfect data, through whatever means, then the next question becomes how we should try to go about interacting with it. Because, hang on, I’m just going to scroll to our thing here. So what we have in the Sloan thing, that we kind of need to stick to, is:

Frode Hegland: So, yeah, our end goal for the software development is to allow a user to put on an XR headset and access their PDF library, and this is really important: it has to access their PDF library through whatever means we decide; it’s going to have to exist. Read and interact with documents in XR and traditional systems, and export their work in traditional, useful formats. This workflow will be possible because of the integration we have with Author and Reader.

Speaker5: But

Frode Hegland: So I’m very, very happy to work with Andrew on getting the right kind of JSON from the Reader library up into the headset, especially now that we have Reader on Vision as well. Of course, we don’t have it for Quest, but ideally that shouldn’t make any difference; it should still work. And the target users are scholars, including university students performing a literature review for a paper they’re writing. That’s also very, very important. So then we just have a few more things. On first use, a view of the user’s library of PDF documents will appear, and then we have interactions; we were just given a list, and we don’t have to follow it exactly, but it covers the full document, components of the document, multi-document, external documents, and so on. So we should at least really look at how to make it super readable, right? So now that we have both Fabien and Andrew here, I’m just going to ask: in order to have a user with a bulk of documents on their computer, Mac, Windows, Linux, whatever, and they provide some means through which that is online, what is the mechanism through which the metadata of those documents, plus the metadata of the library itself, meaning primarily which documents are favorited and stuff, should be transmitted up? I’d like to hear that. I have to do something behind the computer; I spilled something, but I can hear you. I’m still in the room.

Andrew Thompson: Yeah, I mean, as a bare minimum, that’s just essentially a list, just an array of objects. JSON works great for that. You have a list of URLs, and then, since it’s JSON, you could easily add some kind of tag that just says it’s a favorite or whatnot. That’s not complicated. And that is in line with our original idea of how we would handle libraries, because we’d be able to import libraries from Author and Reader. But it also means we could prepare pre-made libraries that come packaged with this project we’re working on. So even if the user hasn’t yet created their own library, they can work with, say, the ACM papers or any other library we put together beforehand. So that shouldn’t be that complicated.
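The bare-minimum manifest Andrew describes, an array of objects with a URL and a favorite tag, could be sketched like this in JavaScript. The field names (url, favorite, name) are illustrative assumptions, not an agreed schema:

```javascript
// Hypothetical library manifest: a list of document URLs plus simple flags.
// A pre-made library (e.g. the ACM papers) could ship in the same shape.
const library = {
  name: "ACM Hypertext sample library",
  documents: [
    { url: "https://example.com/papers/doc1.html", favorite: true },
    { url: "https://example.com/papers/doc2.html", favorite: false }
  ]
};

// A reading system could then pull out just the favorited documents:
const favorites = library.documents
  .filter(d => d.favorite)
  .map(d => d.url);

console.log(favorites); // ["https://example.com/papers/doc1.html"]
```

Because it is plain JSON, adding further per-document tags later (read status, comments) would not break older readers that ignore unknown fields.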

Speaker5: Yeah, absolutely.

Frode Hegland: Also, looking at Fabien’s comment at the same time: yeah. I mean, don’t forget, guys, Visual-Meta is a container for anything. The self-citation is BibTeX, so that is completely the same thought. All Visual-Meta does is basically function as a prompt saying to reading systems: the following is this and that. So yeah, BibTeX from Zotero makes complete sense; it’s technically exactly the same. So when you talk about a list, Andrew, how do you want that list to look? What should I tell my guys to build, and how should it be transmitted?

Andrew Thompson: I mean, it could just be JSON. It doesn’t need the content; it just needs sources, and the sources need to be links, like paths, right? Everything that we’ve been working with so far in our XR project has been links online, so we can access them. Technically speaking, if you have them on the machine itself, relative links will be fine. So if, say, Reader knows where the different documents are on the machine, then you can do a relative path. You will not be able to share that library if it’s relative, so you’d have to have another tag somewhere on that library that says it’s just a private library. But as a bare minimum, that seems to be all you’d need. Of course, the downside with all of this is that we’ve shifted our design in the direction of working with the HTML exports of PDFs, and if we go back to actual PDFs we kind of have to scrap a lot of the functionality we have right now. Which is, okay, we can do that.
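Andrew's point about relative paths and a private-library flag could be sketched as follows. The field names ("private") and the base path are illustrative assumptions:

```javascript
// Resolve a library entry against a base location. Absolute URLs pass
// through unchanged; relative paths only make sense with a local base,
// which is why a relative-path library would be marked private.
function resolveEntry(entry, base) {
  return new URL(entry.url, base).href;
}

const privateLibrary = {
  private: true, // relative paths below are only valid on this machine
  documents: [{ url: "docs/paper.html" }]
};

const resolved = resolveEntry(
  privateLibrary.documents[0],
  "file:///Users/me/Library/Reader/" // hypothetical local base
);
console.log(resolved); // "file:///Users/me/Library/Reader/docs/paper.html"
```

A sharing step could then refuse (or rewrite) any library whose private flag is set, rather than shipping paths that break on another machine.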

Frode Hegland: I don’t think we need to change that. I’m totally happy with working really hard on making sure that the HTML can be rendered in a useful fashion. So, Dene, I’m not sure if you agree, but I don’t think we necessarily have to have the PDFs as such. What do you think? Looking for a screenshot from earlier.

Andrew Thompson: The main reason why we’re using the HTML versus just a pure PDF is just because the HTML has tagging. So we have everything labeled. So if we need to find where the citations are, we can just go to the citations as opposed to PDFs, which are essentially just blocks of text without any real descriptors of what things are. And they’re all formatted inconsistently, which is also a real pain.

Frode Hegland: Yeah, exactly. So what I’m sharing on the screen now is one of our earlier discussions. This is a document, and we’re using the HTML as the source for it, not the PDF. So that means that in many ways we have it readable. But right now you can click on a section and you read only that section. So, Dene, what do you feel? Do we have a strong requirement that the actual PDF has to be in the WebXR, or that the content has to be there, nicely formatted from HTML?

Dene Grigar: Here’s what we promised Sloan. I think it’s very possible; we didn’t say we’re going to use PDFs. We just said we were going to experiment with text, open source, in this XR environment, right? And this is what Mark Anderson has been, you know, nagging us about, right? And I’m saying that in a loving way. He doesn’t want us to be stuck on PDFs, and I agree with him. We should be able to look at different types of textualities, PDF being the easiest one because that’s what we have in our library. So it goes back to: what do we hold in our libraries? Do I hold web pages? Well, yeah, I do have, you know, tabs, bookmarks for web pages in my library. Do I have documents? So it’s not just one thing or the other; it could be both. And by the way, I have a question. I’ve been looking at the last build we had, and I can’t get out of it now.

Speaker5: Well, just.

Frode Hegland: Just click the app button, your Digital Crown.

Dene Grigar: I did.

Frode Hegland: Oh it didn’t go out.

Dene Grigar: Taking me back. It’s like it’s stuck. I’m stuck. I keep clicking and it keeps coming back.

Andrew Thompson: There’s no specific exit that I’ve built in yet. There should be like a button at the top of the headset that exits the current app, which is what you should use. If I understand correctly.

Dene Grigar: Yeah, but I got that. But we should have some sort of way to get out of that document, right, without having to click the headset.

Andrew Thompson: I suppose we could put, like, an exit button of some sort in the wrist menu?

Frode Hegland: What happens when you press the Digital Crown?

Dene Grigar: It did this time. Last time it took me to taking a photograph or a video.

Frode Hegland: Oh, that’s the left hand button. The right hand button? Are you sure that.

Speaker5: Yeah.

Dene Grigar: Right here. Right.

Speaker5: Really?

Dene Grigar: Okay. Now, but I think, for usability purposes, we should. This is like using a back button on a web page, right? It’s better to have something on the page so that people don’t have to use the back button.

Frode Hegland: Yeah, but the standard way to leave, for most of these, is to press the Digital Crown.

Speaker5: But okay.

Dene Grigar: Just saying. So I’m going to try to pick up your other app now, from TestFlight.

Speaker5: Thank you.

Frode Hegland: Yeah. Peter thank you for your patience.

Peter Wasilko: Yeah. Instead of passing paths around, I’d like to see us do more with content-addressable memory and just have a hash of the substance of the document, as opposed to worrying about where it’s located. Then if I got a copy from ACM and someone got a copy from IEEE, they’d still have the same content hash, and then we’d be able to recognize that as being the same document. And we might want to do that at more than one level. We could have a hash for the extracted text from whatever file format the document is in, and we could have a hash for the document’s canonical name, and we could have a hash for the document in PDF format and one for it in HTML format, so that we’d be able to find things without having to worry about where they’re located. And to save even more bandwidth, if we had a designated server, or federation of servers, that could resolve mnemonic identifiers to the actual substantive hash values, that would be even easier. We could have, say, "purple grouse" as the key, which the federated hash-mapping server would then convert to the hash lookup ID for the content of whatever document I had mapped to "purple grouse". Then the system would be able to pull that in from wherever it was, whether it was a local copy or an email that someone had sent me five years ago, and recognize that as being the same document that I meant when I attached the designator.

Frode Hegland: Peter, worthwhile thoughts, but I think it’s a little out of scope for what we can do now. Right now it’s just a matter of your local library in the headset. But we should definitely have that as a wider discussion. Sorry, Peter, that gets into a lot of technical layers. Fabien, also.

Fabien Bentou: So I put a couple of links in the chat on Zotero, because, yes, I mentioned Mendeley because I believe it’s quite popular, but Zotero does roughly the same, as far as I know, and it’s open source. Namely, if I want to build Zotero now, I can. I don’t want to, but I can. And I installed it on my desktop because I used it a little while ago, actually, and I reinstalled it as a test of how easy it would be to import, let’s say, the resulting references that would have been moved after an XR session. And yes, BibTeX comes first. There is also Zotero RDF, if I remember, and a couple of others. There is also a JSON one which I did not know existed, called CSL JSON. But all these are just examples, again, in terms of what’s the low-hanging fruit of moving those around and then getting the result out of the headset. But I think it basically flattens it down; namely, what you have at the end is a list of ordered references. Which is a good step, but arguably, when you want to go back to it, you also don’t want to lose where they are.

Fabien Bentou: Namely, if you put one a little bit more to the left, even though when you flatten it down, the one that is on top on the, let’s say, y coordinate, if it’s above the others, it’s number one; if it’s second, it’s number two, etc. That’s rather direct. But maybe you put one on the left there because that’s where you put references related to computer science, and the one on the right a bit more on, I don’t know, sociology or whatever. And thus, if you could hijack the format (usually there is a comment section) to say, hey, I use CSL JSON or BibTeX, and then use the extra field that Zotero is going to ignore, ideally not destroy, then you can still keep even metadata like, oh, I moved that reference on Wednesday at 5 p.m. So my point being: export to a well-known format that academics can rely on when they do their work, without ideally losing the new information that was added that is specific to XR, basically x, y, z coordinates and maybe time of movement or something like that.
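Fabien's suggestion, export to a standard format but stash the XR-specific data in a field that reference managers carry along rather than destroy, could be sketched like this. The CSL-JSON-style shape is real, but the layout of the note string and the input field names are assumptions, not a Zotero convention:

```javascript
// Export an XR reference to a CSL-JSON-like entry, flattening the spatial
// data (x/y/z position, move time) into a plain-text note so desktop
// tools keep it even if they don't understand it.
function toCslJson(ref) {
  return {
    id: ref.id,
    type: "article-journal",
    title: ref.title,
    note: `xr-position: ${ref.x},${ref.y},${ref.z}; moved: ${ref.movedAt}`
  };
}

const exported = toCslJson({
  id: "doe2023",
  title: "Spatial Hypertext",
  x: -0.4, y: 1.2, z: -1.0,
  movedAt: "2024-03-27T17:00:00Z"
});
console.log(exported.note);
```

The linear export order itself could come from sorting entries by the y coordinate, as Fabien describes, while the note preserves the full position for a later round trip back into the headset.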

Frode Hegland: So, on that, and thank you: we have to make some decisions, and today is the day to make the decisions. Dene, I know you’re in your own world now, but of course you can hear me. So the question is, for what we’re trying to do before the summer: how important do we think it is that the rendered PDF should be in XR? We do not think that’s important, right? We think the data should be there.

Speaker5: Yeah.

Dene Grigar: Yeah.

Speaker5: Right. Okay. Yeah.

Frode Hegland: So that means that we officially now give up on trying to get the PDF into the XR; the PDF can be in Reader or whatever software. And that means that, from what Fabien is saying, we now have to decide on what metadata from the PDF should go to the WebXR, and how. So considering we’re now in the metadata world, basically plain text, which will hopefully be sourced from the HTML version so we don’t have weird issues, we have to decide what should be readable. Peter, I’m seeing what you’re writing here in the comments. But the thing is, native and WebXR have very, very different pros and cons, and we really now need to play to the strengths of both. When you’re in what Andrew has built, the resolution for text is considerably lower; WebXR doesn’t support native resolution, so reading is suboptimal in that environment. What is optimal is seeing relationships. So the key thing, we’ve already started with the references, as Fabien was talking about; the means to deal with them is fantastic. So having a huge space to see how things relate is probably where we want to go, not just having a reference to an exact part of the document, because you’ll do that elsewhere. Dene, please take over; I’m going on forever.

Dene Grigar: So I want to know what Andrew has to say about this. In six months, I have to turn in a report, and the report tells the Sloan folks what we have done that we promised within those six months: did we make everything we promised, and if not, why? And if we did, beyond that, what did we do? So that’s coming up in three months, right? So it’s really important to have this three-month talk, because we’re moving towards the next piece of this before the first formal documents are due. So I want to turn this to Andrew and say: you’ve been working with PDFs, which is something we did promise Sloan, right? We did say we’re going to work with PDFs, but we also said we’re going to work with documents in general, so we’re not limited to PDFs. How much would this move towards other types of docs, like HTML, hinder you or help you?

Andrew Thompson: No, we’ve made the switch off of just rendering straight PDFs, at least for a while, with this last month of development. It’s much better. Rendering a PDF as-is inside the headset was, frankly, terrible because of the resolution issues. So now, just focusing on bits of content, which is easiest to get from the HTML versions of the PDFs, has given us a lot crisper text. I know it’s only been citations so far; the goal is obviously to move beyond that. It’s just been the easiest place to prototype out base interactions and whatnot. So I think the route we’re going down now makes a lot of sense. But of course I can adapt to whatever. Ultimately, I’m just here for implementation, and the rest of the group is behind the decision making. So as long as it’s possible, I’m on board.

Dene Grigar: Shut up, shut up. That’s not true. I mean, making is not separate from thinking, right? That’s the hallmark of our programs; that’s our mantra, right? So shut up. But I will say, then, that we want to say, at this demarcation line, Frode, that we came to the conclusion, in the exploration that we’ve been doing for three months, that PDFs do not make good documents within the WebXR environment. We’re going to move towards another type of document that might provide a more effective way of interacting with and reading text, and that’s going to be HTML and other types. So we can actually say that in our document. Thank you. Thank you, Andrew. That’s very helpful. And I know we’re not recording this, so I do want to get that on record.

Frode Hegland: We are recording this, but we're not making these recordings public, okay? All right. I will put this in our notes, and we will agree on the notes before they become public, of course.

Speaker5: Okay.

Frode Hegland: Yeah. I just want to also say I agree with that: PDF stays outside of webXR. WebXR is where we do relationships and interactions. Fabien, please.

Fabien Benetou: I would argue that for the intended audience, namely academics and people managing the project, I would feel comfortable saying that right now the best platform to develop WebXR for, which pains me a bit, is the Meta Quest, in terms of features. They do a really good job with the browser. It doesn't do everything we want, but it does a lot, including passthrough and that kind of thing. In terms of resolution, as I mentioned initially, especially in the context of text, of course if you have a Vision Pro you want to use that. But somebody who would read and try to learn from an effort such as this project is smart enough to say: imagine if you had the hardware of the Vision Pro with the software features of the Meta Quest. So right now we do it this way, but we can imagine, with a certain level of confidence, that one year down the line, five years down the line, it's going to be the best of both worlds; the hardware will match. Okay, right now we know that PDF proper looks amazing in the browser of the Vision Pro, but it's not what we can work with efficiently. That's the point of the prototypes, in my opinion: they help you get there even though it's not easily available, or even feasible, right now. You help unlock those ideas in people's minds. So I think people who would read or experience this kind of document would be smart enough to at least understand it's a viable bet, in my opinion.

Frode Hegland: I'm perfectly happy with webXR's primary target being the Quest; that makes complete sense. We are learning now what the different benefits are. The Vision is a Rolls Royce, but a Rolls Royce isn't more useful than other vehicles in every area, so that makes sense. So now what we need to decide on is what data should go to the webXR, and how we should get it there. We definitely want all the extractable metadata: the reference section, the abstract if it's available, who wrote the document. And we need to look at what other interactions we want to have in this space. But actually, because this has been going on for months now, let's say that we have a full definition of the metadata, and a user now puts on what Andrew's working on, and there is some kind of a dialog that says, "Welcome to the Future Text Lab. Where is your data?" What should they do? Let's be really meat-and-potatoes about it. Should they point to a link where their documents are? Should they point to something like reader that sends it out? What do we want?

Andrew Thompson: I can picture it as: we have several libraries available for users to just dive into if they only want to see functionality, but if they want to see their own, there's a button that lets them upload their own library, and that is just a JSON file. Uploading the JSON file would be really easy in a browser. Most likely, exporting the JSON would be done through reader or author; that's an assumption I would make.
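
As a concrete sketch of what such a library export might contain, here is a hypothetical JSON shape; every field name below is an assumption for illustration, not an agreed format:

```javascript
// Hypothetical shape of a library export from reader/author.
// All field names here are assumptions, not a settled format.
const sampleLibrary = {
  exportedBy: "reader",
  exportedAt: "2024-03-27T15:00:00Z",
  documents: [
    {
      title: "The Future of Text",
      authors: ["Frode Hegland"],
      abstract: "…",
      references: [],
      highlights: [
        // user-selected text, kept so library search can find it later
        { text: "chocolate", page: 3 }
      ]
    }
  ]
};

// A minimal sanity check the webXR side might run on import.
function isLibrary(obj) {
  return Boolean(obj) && Array.isArray(obj.documents);
}
```

The producing application would emit this with a plain `JSON.stringify` of its internal records, which keeps the handoff portable between reader, author, and the webXR page.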

Frode Hegland: Okay, so that's fine. I agree with all of that. Dene, you too, right?

Speaker5: Yes. Yeah.

Frode Hegland: So let's say...

Dene Grigar: So I’m taking notes so I won’t forget what I was thinking earlier.

Speaker5: Okay.

Frode Hegland: Oh, by the way, I'll share the secret recording of this with any of you if you want it. It's just uploaded unlisted, that's all.

Dene Grigar: So I'm going to recommend, just quickly: we're going to want this one, because this is going to help me write the report.

Frode Hegland: I can make this one public; we haven't said anything controversial. Should I make it public? Anyone have any issues with that?

Dene Grigar: I would not do that because we promised Brandel we wouldn’t do it. So just give us access. Give us in the room access.

Frode Hegland: None of this is Brandel related, or anybody else; he hasn't said anything to us that would be an issue. It's only when he's talking, and he's been so careful anyway. Okay, I'll just share it with you.

Speaker5: That'd be great. Okay. Thank you.

Frode Hegland: Okay, so here's the thing we're going to do once we have Mark back in the room; unfortunately, he couldn't make it today. He's going to provide, as he kind of already has, a beautiful thing: you click and you get the same document's metadata. That's A. B is: someone is in reader, and they have done things such as, and I think this is quite important, selecting text in a document. All those highlights and selections will be in the metadata.

Frode Hegland: All of that is clearly useful to the reader. They will also maybe have written stuff, like what we looked at; I'll see if I can implement that. So not only the native metadata but this stuff too will be accessible to FTL webXR, to call it that, right? So what should reader do to be able to send that to webXR? That should be something they ideally set up once, and then the headset and reader should just talk to each other. Reader could maybe have a local web server or something that only serves JSON.

Dene Grigar: We talked about this at the beginning of our project, about a local server. I think it's fine for us to do that, but I can't imagine asking, I don't know, my colleague Sue Peabody from History to set such a thing up. So it's fine for us, but it's not a long-term solution, right?

Andrew Thompson: I might not be seeing why this is needed, but I'm just picturing a library export not being a super frequent thing; you just send all of your stuff over once. Oh, no. Are you picturing, like, a constant stream of data?

Frode Hegland: Not constant, but the idea is: envision I'm in reader, I'm doing some highlights, I'm writing some notes under them, and now I want to go to webXR. So at that point, ideally, because there will be updates...

Andrew Thompson: Wait, do you want to send all of those notes and highlights into the XR mode as well? Is that what you're getting at? Yes? Okay, I see. So that kind of dramatically changes everything. It's still possible, but...

Speaker5: Okay, let me just explain.

Frode Hegland: So, what I already have in reader: when you highlight text in a document, reader knows about that. It keeps it in a database, so that when you search your library it'll also search the highlighted text. The example I love to use, just to make it really clear: you read a document, there's the word chocolate, and you select and highlight the word chocolate, even if it isn't in the title of the document or anything. When you're in your library, you type Command-F, chocolate, and it'll find that for you. This is something that reference managers don't do, which I think is completely absurd. That data is crucial, because it is where the user has said this is important to them, and we already know it. So I can tell the guys to produce a JSON that has all the basic stuff, plus this stuff, in a format that you would be comfortable with, Andrew. The only complication is that it would be at runtime, when you first put the headset on: not during the session, but to prepare for your session.

Andrew Thompson: Yeah, if you want to send that much stuff, that's fine. I can't promise that we'll get everything in XR working the same way, because that's just a ton of development time, essentially copying implementation that's already been done in different software. So we'll have less new stuff, less of the sort of interactions that we were studying, but we would be able to have more carryover. It's budgeting time; we just have to decide what the priorities are. It's definitely possible. And if we want to leave it as an option, you'll want to have as much of your potential data saved in this library as possible, so your developers don't have to keep adding stuff. If I can't get to all of it, it's not really that big of a deal, but we still have the option to try for it. I'll prioritize whatever you guys want me to; I'm just laying it out there. I'm the only one working on it, so.

Frode Hegland: Oh, absolutely. That's crucial. It's just that in the writing it says our goal for the software development is to allow users to put on an XR headset, access their PDF library, and read and interact with documents in XR and traditional systems. So we are really saying it is their data. So what is the most efficient way to send a JSON up there? And by the way, when I'm talking about a local server, it would be hidden inside reader; once the user has set up a few little things, it should just work. But I obviously agree with Dene, it's not a nothing.

Dene Grigar: Nothing is nothing.

Andrew Thompson: I could maybe see it working if you basically load the web page for this XR mode with data in the URL that links to the JSON export that author made. So in, say, author or reader, whichever one, you click something like "send to XR", and it grabs all of the data it needs, writes a JSON file, saves the JSON file to your computer in some temporary folder, then opens the link to the XR experience, but with the relative path appended to the end. That won't exceed the URL length limit, because you're not sending the data, you're sending a link to the data. And once I load it in XR, it sees that there's a path appended to the link, and then just visits the path and grabs the file. I'd say that's probably the easiest way to do it. No servers needed.
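
The handoff Andrew describes could be sketched roughly like this in the browser; the base URL and the query-parameter name ("library") are assumptions for illustration:

```javascript
// Reader/author side: build the webXR link with the export's location
// appended as a query parameter instead of the data itself.
function buildXrLink(baseUrl, libraryPath) {
  const url = new URL(baseUrl);
  url.searchParams.set("library", libraryPath);
  return url.toString();
}

// WebXR side: recover the path from the page's own URL.
function libraryPathFrom(urlString) {
  return new URL(urlString).searchParams.get("library");
}

const link = buildXrLink("https://example.org/xr/", "exports/demo.json");
// In the headset, the page would then do something like:
//   const data = await (await fetch(libraryPathFrom(location.href))).json();
```

Passing a path rather than the data keeps the URL well under browser length limits, which is the point Andrew makes next.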

Frode Hegland: I kind of understood that. I'm going to ask a tangential question, if that's okay. It is possible for any application to send a URL to a browser, of course, to send a link, right? Can that link open something? Now, of course you can open something externally, but... well.

Andrew Thompson: The browser may try to block this because it's accessing files, but if you allow permissions, I think it should work. We'll have to do some testing.

Frode Hegland: Yeah. I mean, isn't there a way that all this metadata can be included at the end of a URL?

Andrew Thompson: It'll exceed the character limit real fast.

Fabien Benetou: Honestly, I understand all the reliability concerns, avoiding redirection and all that; that's the proper way. But in a prototyping environment, where we test things, I would skip that. If the URL redirects to a server, even one installed in my grandmother's basement, and it's clear, it's not cheating, and it redirects to the file and the metadata and a ton of stuff, and it makes everybody's life easier and the demo more understandable, in my opinion that's fine.

Frode Hegland: What were you saying? Don't use real data, just have preview data? You guys have to treat me like a three-year-old right now; I'm quite lost as to the subtleties of this.

Fabien Benetou: Was the question for me?

Frode Hegland: Yeah, yeah, from what you just said. What was the point?

Fabien Benetou: I'm saying: get a URL, and that URL redirects. So it's a short, normal URL, and it redirects to either the file itself or the file plus a set of JSON for more data. As soon as you have a redirection, you assume the server will exist and work properly, so it makes things arguably less reliable, and you need to be online, etc. But in the context of exploring novel interaction with, I don't want to say reading, but interaction with academic materials, that's an okay assumption. I would assume most academics have internet access, and the ones that will use XR will definitely have internet access. So saying the server is in reader, or on Glitch, or on the headset itself: I understand your concern in terms of whether it will hold in five years, but the project is not five years long. I would say if it holds for one day, the day of the demo, with a big asterisk that doesn't hide the fact that a server is needed, and it can be a free glitch.com server or whatever, I think that's okay. It doesn't hinder the testing of novel ways to interact with academic documents in XR.

Dene Grigar: Let me respond to that, Fabien; that's a great point. We don't have five years, we have two, but we have the option of re-upping this grant. So we want to get to a point in the two-year period where we can say to Sloan: here's where we are, and here's where we want to head next, and ask for another three years. They'll be more apt to give us three years after a two-year success. So what I'm saying is, it's always good to plan for the big picture, and then set up the step-by-step to get there, so that at this point we can say: this is what we're delivering, and this is what we're going to ask for next. So thank you for that.

Frode Hegland: So one thing we can do, because from the very beginning we talked about there being many options for the middle layer of how to get stuff up: I do have a full server I can do whatever I want with. So if we get some basic server stuff that is essentially a JSON redirect, that's fine. Andrew, you can have full access to that; we can put whatever software we want on there.

Andrew Thompson: Yeah, that'd be super easy. It's the same thing I was just suggesting, but you don't have to worry about local files: author just uploads to that server, and then the XR side downloads from that server. That's pretty easy.

Frode Hegland: So if you could write a paragraph, or ideally even provide us a sample JSON with a few things, and write up how you want to be told where that is. Because now we're dealing with the basic user stuff. So, okay: Dene installs reader on her headset...

Frode Hegland: Or her computer, and she then tells this reader software that she has access to this Future Text webXR software. That software will already know that our web stuff exists. But how? What is the process of telling the system, so that once she puts the headset on, she gets her data, not someone else's? Sorry for being very convoluted over the most basic thing there.

Andrew Thompson: This would probably be something handled by, say, reader. When you export the JSON and it uploads to the server, it would create some kind of ID, like a hash or something, that represents that upload. It would be randomized and guaranteed unique; that's not that complicated. Then it just sits on the server with all the others, with its unique ID, and that ID is passed over in the link to the XR version, so I know where to go and find it.

Frode Hegland: Okay. So, to say that back: could we maybe just use the user's email address, because that's going to be unique, right?

Andrew Thompson: Yeah, but then you’d only be able to ever do one upload.

Frode Hegland: No, no. Use that, and then, okay: in reader, the user gives their email address and clicks a button saying "I have webXR stuff." The webXR headset, the first time you use it, asks: who are you? Here's my email address. The webXR headset then goes to the server to see if there is anything new and takes it up. Wouldn't that work?

Andrew Thompson: Oh, you just want it to only ever grab the one library over and over again. I guess you could.

Speaker5: Yeah, the.

Frode Hegland: Initial library, of course, if the user chooses in the future to have more than one library, we can get into that. But I think that’s a complication we don’t need to worry about now.

Andrew Thompson: Yeah, you're right, probably beyond what we need to do right now. That should work. Having the email appended to the link can totally work; that's not that complicated, actually. I guess the file would just be saved with the email as the name, most likely.

Frode Hegland: Just replying to Rob here. Please repeat. Andrew.

Andrew Thompson: I was just vocalizing what we were talking about: the webXR link would have the email address appended to the end when it gets opened through reader, and the file name on the server would just be the email address. That would probably work pretty well.

Frode Hegland: Okay, I like what I'm hearing. I like this idea. So what you're saying is: when you're in reader, it asks for your email address. You put it in, and the email address is sent to our server, which creates an account with that name, that folder name so to speak. The user then clicks a button in reader, and a link is generated that has their email address at the end of it, so that when the webXR is launched through the link, it uses that information to find the folder. Is that right?

Andrew Thompson: Yeah, pretty much. I don't think it would be a folder; it would most likely just be a single file. But yeah, it depends on how many different uploads you need.

Frode Hegland: Yeah. Okay.

Andrew Thompson: That’s. Yeah. We could do a folder for Just so we can build off of it if we ever need to. That might be smart. But for this project, we’ll probably only ever have one library per email address. And for testing, you could just put in a fake email address to generate another one.

Frode Hegland: Looking at what Fabien posted: could you explain what that is?

Fabien Benetou: Yeah, sure. Just as an example: I imported one document, via a URL from a Nature page, into Zotero, and exported it as a library, so it's the CSL JSON format I mentioned. I saved it on a WebDAV server, which allows you not just to get that file; if you click on it, it's just a web page and you grab it, fine. But with WebDAV you can also write it back, assuming you have the right credentials, straight from the browser. So it's like a remote file system, kind of. It's still a real file, still something that can be exchanged. If somebody else uses it and they don't have write access, they can just read it. And if it's saved properly as CSL JSON, then Zotero should be able to open that modified library, for example with a reference removed or the order changed, and just use it as normal. I use Zotero as an example because it worked, it's relatively popular, and it's open source, meaning that if somebody wanted to make an "open in webXR" menu item in Zotero, not even a plugin, to go back and forth from the headset, I imagine that would be relatively straightforward.
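
Since WebDAV file access is plain HTTP GET and PUT, the read-and-write-back flow Fabien describes could be sketched with ordinary fetch calls; the server URL, credentials, and file name below are placeholders, not the project's actual setup:

```javascript
// Hypothetical WebDAV location of the exported CSL JSON library.
const DAV_URL = "https://dav.example.org/libraries/demo.json"; // assumption

// HTTP Basic auth header; btoa is available in browsers (and Node >= 16).
function authHeader(user, pass) {
  return "Basic " + btoa(user + ":" + pass);
}

// Read the library: a WebDAV file read is just a GET.
async function readLibrary(user, pass) {
  const res = await fetch(DAV_URL, {
    headers: { Authorization: authHeader(user, pass) }
  });
  return res.json(); // the CSL JSON array Zotero exported
}

// Write it back: a WebDAV file write is just a PUT, so Zotero (or
// reader) can later re-import the modified library.
async function writeLibrary(library, user, pass) {
  await fetch(DAV_URL, {
    method: "PUT",
    headers: {
      Authorization: authHeader(user, pass),
      "Content-Type": "application/json"
    },
    body: JSON.stringify(library)
  });
}
```

Without credentials the GET would still work on a public share, which matches Fabien's point that read-only users can still grab the file.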

Frode Hegland: I think this is great. What I think should happen is that I get my guys to set up a WebDAV server on the Future Text Lab website.

Frode Hegland: With the system that Andrew is aware of. When a user leaves reader to go to the headset, a link is sent to the browser in the headset that includes two things: their email address as their ID, and something that references the WebDAV. Because the WebDAV can change over time, the webXR software knows that the WebDAV reference is temporary, so it'll look up what it knows to be the WebDAV. So the URL would look something like futuretextlab.info slash webXR slash client-server, because that would be the name used as a token that will be exchanged, slash Dene Grigar at icloud.com or whatever it is. Reader will at the same time have put this data on that server; let's not forget that, otherwise this won't happen. And it means the webXR will be able to access that data. And as you said, Fabien, if that data is changed, such as someone adding a highlight or a note while in webXR, it will be written back, and reader will then have the job of checking if there is a change and, if so, choosing to incorporate it or not. Is that correct?

Andrew Thompson: That sounds correct, in the sense of us just brainstorming. It should work in theory; it'll take some testing to verify a lot of this. But yeah, as long as you're getting the data up on a server, we can get it down one way or another.

Frode Hegland: So that also means we can indeed, as a bonus, have the actual full PDF in the headset, should the reader, for whatever crazy reason, want to look at it. But we will focus our interactions on the metadata and wider interactions. The full PDFs will be uploaded after the metadata.

Speaker5: Right.

Frode Hegland: Yeah. Let's see your message. Okay, I'm very grateful for our clarifications and simplifications here. I'd like you, Andrew, to please focus on this for the next few days, so that by Monday...

Andrew Thompson: Unfortunately I can't, because we don't even have the importing working yet. I think we need to get importing working before we start importing from a server. That's what I've been working on right now, to lay the groundwork for this, but it's not an easy task, so I'm not going to have it by Monday, unfortunately. But I'll be doing my best.

Speaker5: That’s fine.

Frode Hegland: Yeah, okay. Fabien, you've got to go; that's fine. Thank you very much for being here today. I just want to address Peter's concern here.

Frode Hegland: Okay, why don't we just do this, then. This is just a suggestion to the community: in addition to adding your email address, you also add a password, which is made clear is not a private password, and that password will be included in the URL. Let's not even call it a password; let's call it a pet name or token or something, so that it isn't super easy: to get at someone else's data, you'd have to guess more than their email address.

Dene Grigar: The access code. It's an access code.

Speaker5: Thank you.

Andrew Thompson: Let's just ditch the entire email thing and have it generate, like, a six-digit code that it saves the library under. And you just put that code into your headset.

Andrew Thompson: Yeah, because then you could also share it if you want.

Frode Hegland: So when you first open up the headset, it says "Who are you?" and you copy and paste or type that six-digit access code.

Andrew Thompson: Yeah, or eight, depending on how secure we want to be, but six should be fine.
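
A randomly generated digit code of the kind discussed here is only a few lines; this is a sketch, and a real server would also retry until the code is not already taken:

```javascript
// Generate a numeric access code of the given length (six by default,
// eight if more guessing resistance is wanted, as Andrew suggests).
function makeAccessCode(length = 6) {
  let code = "";
  for (let i = 0; i < length; i++) {
    code += Math.floor(Math.random() * 10); // one digit, 0-9
  }
  return code;
}
```

For anything beyond a demo, `crypto.getRandomValues` would be a better randomness source than `Math.random`, and the server would check new codes against existing ones before accepting them.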

Frode Hegland: And so that means that in reader, you would have to enter this code. Okay, that's fine.

Andrew Thompson: Yeah. If you don't want people to enter a code, then maybe they just make their own username. But we don't need both email and username, is what I'm saying; we just need to pick one. Because we don't care about security for it.

Frode Hegland: I'm fine with that, but it would be really annoying if someone realized they could just use their professor's email address and see what they're doing. And, you know, hey, we may get lucky; this may take off despite just being kind of a demo. So let's be a little careful. Yeah, we'll do a number.

Frode Hegland: Let's do seven digits, because it's in between. I don't know; any thoughts on that, guys?

Dene Grigar: Something short, so we can remember it. I don't want it to be 16 characters just to keep out the riff-raff.

Andrew Thompson: It depends on whether you're able to customize it yourself or it's generated randomly. If it's generated randomly, you want to keep it under eight, because it's hard to remember otherwise. If you can make your own, you could let it go up to 12 or something, and it's just a username at that point. Either one is fine; we just need to pick.

Speaker5: Well, okay.

Frode Hegland: How about There are systems that will generate code names, so maybe just two words.

Frode Hegland: They exist, right? It's easier to remember, you know, "rat poison" than a long number. Does that make sense?
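
The two-word code name idea could be sketched as below; the word list is a tiny placeholder, and a real list would be screened for unwanted words and combinations, as the conversation notes:

```javascript
// Placeholder word list; a real one would be larger and curated.
const WORDS = ["amber", "birch", "cedar", "delta", "ember", "fjord",
               "garnet", "harbor", "indigo", "juniper"];

// Pick two words at random and join them into a memorable code name.
function makeCodeName() {
  const pick = () => WORDS[Math.floor(Math.random() * WORDS.length)];
  return pick() + "-" + pick();
}
```

With a list of 1,000 curated words, two-word pairs give a million combinations, comparable in guessing resistance to a six-digit number but easier to remember.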

Andrew Thompson: Yeah, but if you're giving them actual discernible names, people are going to want to change them; they'll be like, "I don't want to be rat poison, I'll be something else."

Frode Hegland: No, but there are systems that have cleaned the lists up, so there aren't, you know, bad words: swear words or violent words or racist words, that kind of stuff.

Andrew Thompson: Stuff. Right? But if you give somebody like a discernible name, they’re going to want to be able to customize it. That’s just human nature. Maybe we don’t care because this is kind of just an academic thing.

Speaker5: Well, it’s.

Frode Hegland: Nice to just keep moving, you know, over the next many years of this. So maybe we go back to the whole idea of having name or email address plus 1 or 2 characters then.

Speaker5: Just, you.

Andrew Thompson: Know, yeah, it ultimately doesn’t matter from a programmer perspective. Anything that’s unique and isn’t easily guessed by someone else unless you choose to share it.

Speaker5: Okay.

Frode Hegland: Email address plus port numbers. That’s reasonable to remember. No.

Speaker5: Anyway, this can.

Frode Hegland: Always be decided later. So looking at our agenda here, I think we’ve done quite well.

Frode Hegland: One thing we really need to do now is look more at case studies. Rob and Peter, you're not students at the moment, nor professors, but your case studies are very relevant too. So if you feel like writing down, "I'm on a computer, I want to do this; I'm in a headset, I want to do this," that is extremely valuable. I want to try to do that myself as well.

Frode Hegland: Okay. Anything else for today? Are we actually, for once in history, finishing a bit early?

Dene Grigar: I had something else I wanted to bring up, and now I can't remember what it was. It was very important. Oh, I hate when I do that. It wasn't about Sunday; you and I need to talk about Sunday, like what we're going to do. Actually, I don't know: do we have a formal presentation on Sunday?

Speaker5: I think.

Dene Grigar: I remember what it was. We need to start thinking about the hypertext conference and what we’re going to be giving a paper on. I mean, we can give a demo, right? The demo is a no brainer, but are we going to do a paper? And if that’s the case, you and I need to start writing it.

Speaker5: I think we.

Frode Hegland: Need to do that, but I think it probably is useful to think about it in terms of a presentation. And then paper and demo comes out of that where, you know, I want.

Dene Grigar: Nothing you do at hypertext is without a paper, remember. You have to write a paper.

Frode Hegland: It can also be framed as a paper, but I really think it needs to be quite big, which is why it shouldn't be just one thing. Of course, our webXR will be the centerpiece of it, but it should be part of a flow. So we need to think about that. But, Dene, should we try to talk on Friday, then?

Dene Grigar: Yeah, that'd be a good day for me. Let me see; I've got a few things on my schedule. I know I've got a skin cancer appointment, so I'll go get some skin cancer work done. Okay, but no big deal; it happens all the time. Eastern European skin, right? From my mother. So I've got that on Friday, but there's something else.

Frode Hegland: Tomorrow’s also fine for me.

Dene Grigar: No, but let's shoot for Friday, and let's shoot for something like nine in the morning.

Frode Hegland: Oh, that’s an hour after today, right?

Dene Grigar: Yeah. And by the way, next week, folks, we’ll be back on the same timeline, right.

Frode Hegland: Well, indeed. So we will stick to American time. Only us Brits will change.

Speaker5: Yeah. Okay.

Frode Hegland: I’ll see all of you on Monday. And I’ll see you on Friday. All right. I’ll see you an hour ago now, so to speak. All right.

Speaker5: All right.

Dene Grigar: Bye, everybody.

Frode Hegland: Thank you everyone. Bye for today. Thank you.

Chat log:

15:03:00 From Peter Wasilko : Good Morning, brunching off cam.
15:03:46 From Frode Hegland : https://public.3.basecamp.com/p/osctnCWNDkXwunxedzbdcv16 for Agenda and https://youtu.be/wxPifiCieh0 for a bit of a view
15:04:19 From Dene Grigar : https://public.3.basecamp.com/p/osctnCWNDkXwunxedzbdcv16
15:04:55 From Fabien Benetou : Reacted to “https://public.3.b…” with 👍
15:05:26 From Peter Wasilko : Apple really needs to let Web XR access the full resolution of the AVP, otherwise it is a major disincentive to purchasing their hardware.
15:05:56 From Peter Wasilko : Hopefully the antitrust action will force them to become less of a walled garden.
15:08:32 From Dene Grigar : Apple is facing anti-trust cases in Europe right now.
15:08:39 From Frode Hegland : Reacted to “Apple is facing anti…” with 👍
15:09:52 From Fabien Benetou : (still watching)
15:10:24 From Fabien Benetou : k
15:10:34 From Peter Wasilko : Reacted to “Apple is facing anti…” with 👍
15:11:23 From Peter Wasilko : Was it recorded?
15:11:38 From Peter Wasilko : Eager to see the playback!
15:12:05 From Peter Wasilko : URL!!!!!
15:12:08 From Peter Wasilko : Please!
15:12:41 From Peter Wasilko : Has anyone done an academic study of IF?
15:13:02 From Dene Grigar : Twisty Little Passages
15:13:07 From Peter Wasilko : Thanks for the pointer!
15:13:09 From Dene Grigar : The MIT Press
15:13:14 From Peter Wasilko : My favorite imprint!
15:21:34 From Peter Wasilko : I wonder if we could get the SimulaVR folks to attend one of our meetings. They are primarily focused on Work uses of XR so they might want to be supportive of our work and actually listen to our feedback. Making suggestions to Apple feels like sending messages in bottles through the Event Horizon of a Black Hole.
15:22:11 From Peter Wasilko : https://shop.simulavr.com
15:24:07 From Dene Grigar : good idea, Peter
15:24:48 From Frode Hegland : (ADDED https://shop.simulavr.com to the list to consider)
15:25:05 From Rob Swigart : Has anyone watched 3 Body Problem? Future VR.
15:25:16 From Dene Grigar : Is it good, Rob?
15:26:07 From Frode Hegland : Reacted to “Has anyone watched 3…” with 👍
15:28:04 From Dene Grigar : simulation
15:28:19 From Dene Grigar : Sim-Stem
15:28:39 From Peter Wasilko : Curtis Hickman calls it Hyper-Reality if other senses beyond sight are leveraged.
15:28:41 From Rob Swigart : By the 3rd episode it gets very good.
15:28:54 From Peter Wasilko : His book of that name is superb.
15:29:07 From Dene Grigar : i will read it
15:29:07 From Rob Swigart : Full cerebral VR, direct link to the brain.
15:29:37 From Dene Grigar : yes
15:30:15 From Peter Wasilko : We might need Elon’s neuro-link tech for that!
15:30:26 From Dene Grigar : noooooooo!!!!!
15:31:14 From Dene Grigar : My gut feeling is that Apple is leaving the keyboard behind and aiming us toward voice controls
15:32:03 From Rob Swigart : I suspect voice control won’t work for me for long. I have a lifetime of typing…
15:32:04 From Peter Wasilko : Voice falls down in public fast.
15:34:35 From Peter Wasilko : I wanted to throttle Steve when he didn’t use DIFFERENTLY; different is not an adverb.
15:36:34 From Peter Wasilko : Let’s party like it is 1999!
15:36:58 From Dene Grigar : It is like saying travel safe instead of safely
15:39:47 From Frode Hegland : It was not used as an adverb
15:39:55 From Dene Grigar : right. It is an adjective
15:40:10 From Dene Grigar : or better a noun
15:40:30 From Peter Wasilko : I can be so literal at times, to me ‘travel safe’ is a fire proof lock box I can bring with me to a hotel.
15:40:34 From Rob Swigart : which refers to the missing subject. Think Different.
15:40:49 From Fabien Benetou : some related concerns I had with an in-VR editor last year https://twitter.com/utopiah/status/1642077651920625665
15:41:02 From Frode Hegland : https://futuretextlab.info/current-testing/
15:41:12 From Frode Hegland : https://futuretextlab.info/2024/03/20/reference-bock-interaction-5/
15:41:34 From Peter Wasilko : Replying to “which refers to the …”

Ah, insight dawns on me
15:57:49 From Fabien Benetou : Zotero from BibTeX or any format it can import, then; the point being a popular tool that people actually use, keeping it low-hanging fruit BUT a way to USE the result of the action within the workflow of academics
16:05:01 From Fabien Benetou : so from https://www.zotero.org/support/dev/data_formats then maybe from Andrew’s comment using CSL JSON
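A minimal sketch of the idea above: reading Zotero’s CSL JSON export and turning each item into a short label for display. The field names (`author`, `issued`, `title`) are standard CSL JSON; the sample entry is invented for illustration, not taken from anyone’s actual library.

```javascript
// Sketch: turn Zotero CSL JSON items into short "Author (Year) Title" labels.
// CSL JSON stores the year as issued["date-parts"][0][0].
function cslLabel(item) {
  const first = (item.author && item.author[0]) || {};
  const name = first.family || first.literal || 'Anon.';
  const year =
    (item.issued &&
      item.issued['date-parts'] &&
      item.issued['date-parts'][0] &&
      item.issued['date-parts'][0][0]) || 'n.d.';
  return `${name} (${year}) ${item.title || 'Untitled'}`;
}

// Invented sample item in CSL JSON shape:
const sample = [
  {
    title: 'As We May Think',
    author: [{ family: 'Bush', given: 'Vannevar' }],
    issued: { 'date-parts': [[1945]] }
  }
];

console.log(sample.map(cslLabel)); // one short label per item
```

For full citation formatting (styles, locales) the citeproc-js library linked below in the chat would be the heavier-weight route; this sketch only covers quick labels for an in-headset list.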
16:05:03 From Frode Hegland : Exactly Fabien
16:05:31 From Fabien Benetou : related https://github.com/Juris-M/citeproc-js
16:05:49 From Peter Wasilko : Reacted to “related https://gith…” with ❤️
16:05:54 From Peter Wasilko : Reacted to “related https://gith…” with 🚀
16:08:11 From Rob Swigart : Is it possible to sync safari bookmarks or reader list so it appears in Vision?
16:09:05 From Frode Hegland : It should be automatic in Safari I think Rob
16:09:34 From Peter Wasilko : Zotero has awesome web scraping in browsers
16:09:55 From Rob Swigart : should be but isn’t
16:10:48 From Frode Hegland : Much of this is version 1 issues I think Rob
16:11:52 From Peter Wasilko : Pagination
16:12:03 From Peter Wasilko : So you can cite to a page in the original PDF
16:13:37 From Peter Wasilko : A Concordance would be really useful; it is a key tool in everything-boxes like DEVONthink.
16:14:54 From Fabien Benetou : example of “flattening” https://twitter.com/utopiah/status/1264131327269502976 to get code sequentially https://gist.github.com/Utopiah/26bae9fecc7a921f8bfd38cf5fc91612#file-logo_vr_hubs-js-L44 which could be used for other content, e.g. references, then a format, e.g. BibTeX and CSL JSON
16:15:10 From Peter Wasilko : https://www.npmjs.com/package/html2json
16:19:11 From Peter Wasilko : Overlapping Selections
16:19:39 From Peter Wasilko : Selection Tagging/Annotation
16:25:07 From Peter Wasilko : We could preview ideas in 2-D if we had some sort of plug-in architecture, so Andrew could focus on devising a contextual menu to let readers choose which plugin to run (say ‘highlight passages by readability index’) and display their results as an overlay or in a side panel.
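The plug-in idea above could be sketched as a simple registry: each plugin takes the document text and returns the passages it wants highlighted, and the contextual menu just lists the registry’s keys. All names here are invented for illustration; nothing below is from the actual prototype.

```javascript
// Hypothetical plugin registry for the contextual-menu idea.
const plugins = new Map();

function registerPlugin(name, run) {
  plugins.set(name, run); // run: (text) => array of passages to highlight
}

function runPlugin(name, text) {
  const run = plugins.get(name);
  if (!run) throw new Error(`No plugin named "${name}"`);
  return run(text);
}

// Example plugin: flag sentences longer than 25 words as a crude
// stand-in for a readability index.
registerPlugin('highlight long sentences', (text) =>
  text.split(/(?<=[.!?])\s+/).filter((s) => s.split(/\s+/).length > 25)
);
```

The menu itself would then iterate over `plugins.keys()` and render each result set as an overlay or side panel, as described above.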
16:30:40 From Peter Wasilko : It might be easier with a dedicated Electron based client with full desktop app privileges where we could have our own security policies rather than focusing on opening things in a standard browser.
16:33:07 From Dene Grigar : brb
16:33:22 From Rob Swigart : Reader is 2.99 in the app store for Vision.
16:35:04 From Frode Hegland : You should have a free version Rob in Testflight
16:35:18 From Rob Swigart : Have to figure that out
16:36:27 From Frode Hegland : Should be an email Rob.
16:36:36 From Fabien Benetou : example of a minimal Zotero library a WebXR page could read https://webdav.benetou.fr/ExportedItems-FromZoteroAsCSLJSON.json but also write
16:40:13 From Fabien Benetou : e.g hmd.link/?https://fabien.benetou.fr/pub/home/future_of_text_demo/sloan/?data=https://webdav.benetou.fr/ExportedItems-FromZoteroAsCSLJSON.json&user=fabien@benetou.fr
16:40:58 From Peter Wasilko : We need a security layer or a hacker could pretend to be anyone whose email addy he or she knew.
16:41:05 From Fabien Benetou : have to go in a min
16:41:49 From Peter Wasilko : Makes sense
16:41:53 From Fabien Benetou : gotta go, bye bye
16:43:48 From Peter Wasilko : See Raskin’s take on passwords in The Humane Interface
16:44:49 From Peter Wasilko : Two adjectives and a noun
16:45:40 From Peter Wasilko : Handsome Short Bookworm