22 May 2024
Frode Hegland: Playing.
Speaker2: Oh.
Frode Hegland: So, Adam, next week, this time you will be here. You will be doing. Yeah.
Adam Wern: So nice.
Frode Hegland: That’s so cool. Except I expect you’ll be just on the headset the whole time. We won’t get to talk to you, but, you know, that’s a risk I’m willing to take.
Adam Wern: Yeah, I think we can go. Go to a few bars and restaurants with the headset, of course, but that as well.
Frode Hegland: No comment. Have you done that, Dene? Have you been using it outside? That's an interesting thing, actually, how people react — and mostly don't. Not that I go to a bar, but, you know, a coffee. A bitter coffee. So, Andrew, obviously you're going to do a full update soon, and we're very much looking forward to it. In fact, while we're waiting, I'm going to have a quick look and see who else joins us. Anything else you want to say, though?
Andrew Thompson: A lot of the changes are minor, so I'm not sure how obvious they'll be unless you directly try the previous one. But just let me know if anything still feels janky with the interactions, because I tried really hard to polish a lot of the document stuff.
Speaker5: Can’t do. And. Well. That’s weird. So. Yeah.
Frode Hegland: It is, of course. No, that’s. This of course, listed.
Speaker5: On our.
Frode Hegland: Page.
Andrew Thompson: Oh, I got to see how crummy the visuals are on the Vision — I have the Vision with me right now. And my goodness. Yeah, the whole resolution is low. I mean, we knew it was, but it's pretty crazy. Even the library is barely readable, and that font is pretty large.
Adam Wern: Was it like that from the beginning, or is it something we’ve done?
Andrew Thompson: Well, I didn't test every update on the Vision, so I can't say for certain that it was from the beginning. But I am almost certain it's been the same, because I haven't touched the rendering system in ages. I think it's just that the font is bigger in the library, so you can still read it; it's just not good. The document font is just a hair smaller, and that's enough that it gets completely muddled.
Adam Wern: At least on the Quest side, things are changing now and then, when the browser is updated, so you can't really know what will happen. They change the resolution and the rendering techniques along the way, I've noticed.
Frode Hegland: Hey, we have visitors — hi Mark, hi Fabien. Fabien, please text me, or through some other electronic means give me your information about when you're arriving in the greatest capital in the world — I mean London, in case there's confusion. Right. I think it would be nice if Leon could join us; I don't know if he is, or Peter, but I think we should start now. You guys okay with that?
Frode Hegland: I’m going to hang on. Just going to send a link. Of the. No, that’s not what I meant to do.
Frode Hegland: Dene, could you please do that? Could you please paste in a link to the agenda?
Dene Grigar: You can’t do that.
Speaker5: That’s weird.
Frode Hegland: It's being a bit funny. Sorry, the...
Dene Grigar: Public link gets shareable. Public link, copy to clipboard. Yeah.
Frode Hegland: There it is. Yeah, yeah. No, it’s fine, I got it.
Dene Grigar: Okay. Never mind.
Frode Hegland: Yeah. I don’t know why it went. Sorry. The.
Speaker5: Where is it?
Frode Hegland: It's so annoying — the chat kind of hides when you make the window a bit smaller. Hello, Mr. Nosh. Okay. Right, let's have a look at the schedule for today. And I would actually like to start with a little bit of an update on what I've been doing in here, in Reader — please scream if you cannot see it. So there's a few things. Not a huge amount, but I got an email from a user asking if we could make Reader support dark mode. Of course we can — though simply reversing a PDF, you know, gives you problems with the images. We tried it. So we have this view, as I think many of you have seen, where you select either the whole document or a bit of text and you hit spacebar, and you get this kind of reading view, or a Lift. And what I've done since is change it so that if you are in dark mode on the Mac and you hit spacebar, this is what you get. That took almost no programming, and it took two days from a user request to doing it. And I'm very, very excited by it, because it shows that if people have disabilities, there are sometimes things we can do that actually help them. So that was really, really neat. The second thing is, if I do something like this now... Well, yeah. First I wanted to show you, because today we're going to really look at the interface of the map: this is what I've done with the control-click menu in Reader. It's a bit small — can you all see?
Dene Grigar: It’s small, but it’s okay.
Frode Hegland: Okay, so I've been polishing and polishing to get rid of stuff, because a large context menu is just not useful. So we have, as you saw: select text and Lift — you hit spacebar. Add a note is on here, like this. Copy, Cite the document — this is when you don't have text selected. And then finally Ask AI, where it sends these prompts with the whole document. If you select text, it's different. You have Lift, but only for the selected text. Copy — you can now copy text, or copy it as a quote for automatic citation. Find — and I think this is really cool, actually. So let's do a Command-F. If we do Find in Document, it shows you where it appears in the document. It happened to be just once here. But we also have these two other options. Find in Library searches the library — and there was nothing, but it's nice to be able to do it. Then finally, Find Online — it's just a Google search, right? But let's be honest, we often do Google searches. And what I think is really cool is that the keyboard shortcut is just the Enter key. So that means that when you're reading something and you come across something you want to look up, once you have learned by doing this a few times — oh, I'm so sorry, I have to open that door. Would you mind taking over, please? I apologize.
Dene Grigar: Okay, I don't know anything about Reader, but let's go on to invitations — I know a lot about invitations. So let me share a screen with you. We have two sheets. The first sheet is the list of people that we're inviting to submit to the book and to come to the symposium. Everything in yellow is the people I'm inviting personally. Let me get this over a little bit so you can see better. And then I have a second sheet, which is all the local people, from the state and the tri-state area, who are involved in VR and technology and games. And these are the folks that have already RSVP'd. They're coming, and a couple of them want to maybe submit something for the book — Toby Roberts especially. So as you can see, we're already sending out invitations; the ones that are yellow are the ones I'm sending out. So quite a few people. If you have some folks you want to add to this list, please let Frode and me know so we can add them. But right now there's about 35-ish on this list and, as you can see here, about 12 on this one. Okay, we have room for 50 people easily in our room, and I do plan to bring a couple of students who are doing really well in the spatial computing class that I'm teaching in the fall. And of course a few lab people: Andrew will be there, I'll be there, and probably Greg Philbrook — people like that. Holly will probably come. And that's the update on the invitations. Frode, you ready to take over again? Yes.
Frode Hegland: Sorry, and thank you. Okay. There's not much more to show, but there is a little bit. So — I'm so proud of this context menu, because there's hardly anything on it, right? And you saw — normally on a context menu you don't have the keyboard shortcut listed, right? This is to help people learn what the keyboard shortcuts are. For instance, if you want to highlight text to make it orange, you do O. One thing that Adam suggested quite a while ago, actually, and twice: in our environment, when we have something like a context menu — obviously not as flat and boring — we should probably show the voice prompt for each thing, so people know what they can say, because it's not always easy speaking into the void, right? But I just find it... Yeah. Mark, please.
Mark Anderson: Just a very quick one — and maybe someone like Peter, with his wonderful trove, will come to the rescue. But I'm sure I remember, while you were talking about the relevant point about not having overloaded context menus: there was something where you'd open the context menu, and if you did something like a hover for a while, it would actually expand. If I remember, it had a very small number of always-in things, and then the things that you actually used — because the underlying issue is that most of us actually want a slightly different context menu. That's just the way humans are; they refuse to use the tools in the way that they were designed. So, you know, we want a slightly different mix. I'm not suggesting anything; I just put that out there before I forget, because I've no idea, actually, how easy or difficult that is to do. But it's worth mentioning in the context of the issue you rightly mentioned, which is having overloaded context menus. Thanks.
Frode Hegland: Yeah, it's an important point. When it comes to this — not our WebXR — there aren't that many commands, so it's okay to have most of them; they just need to be done in an intelligent manner. When we get to what we're doing for WebXR, we obviously need to get a default set. But I do agree we should really look at how that can be customized for different users, in at least those ways.
Frode Hegland: Now, before I get to the final thing: you notice this says Find in Document, Find in Library, Find Online. Even just getting these phrases right actually took quite a lot of time and back and forth — doing something and realizing it made no sense. But here's the important bit. So let's do Extract Names using GPT-4o. It's been very slow today and there have been some API issues. Anyway, here it is. Right. So this is fun. You've seen this before. Trust me, there is something new. And I'm okay with these, but I don't need Dene, because she's already the author, as is Rob. So, you know, I can play with this. This is plain text. And then here: Save Metadata. I press that button and it's now a new sheet. This is the Peter style — is Peter here? Yeah, good. This is the Peter-style extra Visual-Meta. You've kind of seen this as well. So it says visual-meta, colon, and then the name of the prompt, put in the whole style. And then it has the prompt, the engine, the operation date and the location, in case these variables matter. But that is not what's new. Okay, this is what's new. I already said, I think, that if you don't select text, your Ask AI does the whole document, right? So if I now do Extract Names — please have a look. Oh, what in the world?
Speaker5: Hang on.
Frode Hegland: It may have been because I edited it, actually.
Frode Hegland: For crying out loud. What it really does — okay, I'll show you here without editing it. That's an interesting book. So I now did Summarize, Save Metadata, and here it is, right? Yet another one. So if I now do it again, it jumps to that page. Right now that is not useful. But the reason I'm showing it to you is that what I've asked the guys to do is: if you do an Ask AI command again, it will take it from here and put it in the results dialog, so to the user it looks like it's done instantly. And instead of the dialog having Save Metadata at the bottom, it'll have a new button that says Refresh.
Frode Hegland: So it'll do a new search if you want it done. So this goes into what we can hopefully feed into the JSON, which can then hopefully go into what we're building, because these extracted names here — and let me just zoom in a bit — I think are very, very important. And as both Peter and Mark have been talking about recently: are they correct? One nice way to test that is, once you've added it, select it in the metadata and do Command-F to see where it shows up. And if it doesn't seem to show up elsewhere in the document, you know we have a problem. So here we can see it's in two places, so we can start experimenting. Now, any questions on that kind of round trip — adding metadata, manually or by the user, that they are comfortable with, and how it goes into our wider world? You're all just stunned? I'll accept that. That's fine. Let me just...
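[Editor's note] The round trip described here — extract entities with AI, save them as metadata, then verify each one with a find-in-document pass — can be sketched in a few lines. This is an illustration only: the JSON shape and field names below are assumptions for the sketch, not Reader's actual export format.

```python
import json

def verify_extracted_names(document_text, names):
    """Automate the manual Command-F check described above: count how
    often each AI-extracted name actually occurs in the source text."""
    return {name: document_text.count(name) for name in names}

def export_metadata(counts):
    """Bundle the checked entities with provenance fields, loosely in
    the spirit of the Visual-Meta appendix (prompt, engine, etc.).
    Field names here are illustrative, not Reader's real schema."""
    return json.dumps({
        "visual-meta": {
            "prompt": "extract names",
            "engine": "gpt-4o",
            "entities": [
                # an entity that never appears in the text is a likely hallucination
                {"name": n, "occurrences": c, "verified": c > 0}
                for n, c in counts.items()
            ],
        }
    }, indent=2)

doc = ("Dene Grigar and Rob Swigart co-authored the chapter. "
       "Dene Grigar directs the lab.")
counts = verify_extracted_names(doc, ["Dene Grigar", "Rob Swigart", "Ted Nelson"])
# "Ted Nelson" never occurs in the text, so it is flagged as unverified.
```

An XR client could then read only the `verified` entities when laying out the map view, sidestepping hallucinated names entirely.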
Dene Grigar: I'm just processing. I mean, I'm processing. Like, what — how, in what way, are we using this?
Frode Hegland: So, without getting distracted by showing the whole thing: the idea is that the end user, in their environment, whatever it might be, can choose to add Visual-Meta manually, or through AI, or whatever it might be. And I'm really talking about entities of various types. Personally, I'm not interested in doing analysis where it may hallucinate — that's outside the scope for this. This is then something that the software is aware of, so when the user exports the library, this is included with the metadata.
Frode Hegland: So what is included when Andrew gets it in his XR environment? All these extracted names — maybe headings, if we don't already have them. So that means that in the map view, these are entities that are known about these documents.
Dene Grigar: Well, can I ask a question? What do you mean by hallucinate? I know what I mean by it — that's the William Gibson notion, the whole idea of cyberspace as hallucination, right? What do you mean by hallucinate?
Frode Hegland: I mean when the AI makes up things that are not correct, or were not in the source data. That's obviously a huge issue, and it's something that I'm sidestepping by using the AI in a very limited way. But the end user should be able to use it for whatever they want, at whatever level of risk they are comfortable with. That's the thinking, anyway.
Dene Grigar: Okay. That helps. I also think of the LSD hallucinating. I wasn’t going to bring that one up.
Speaker5: No comment.
Dene Grigar: Not something I’m into, but whatever.
Frode Hegland: Mr. Anderson.
Mark Anderson: Yeah, I was.
Speaker5: Here.
Frode Hegland: As of today. Oh, sorry. I just have to say, Mark, I finally got confirmation from grad school that all my paperwork is done. So you and I are now peers for the first time in six years.
Speaker5: Okay, over to you then. You’ve always been peers.
Mark Anderson: I think "hallucination" is also slightly to save the face of the AI people who, you know, don't want to admit that it makes stuff up. But anyway, that's by the way. The point of the question — going back to the processing gesture on the page, and because today we're talking about stuff going towards the demo — is this: the thing we've just looked at in Reader is essentially about making metadata lists. Because I'm not sure we actually have a demo map view yet, so we don't actually know what the map is. But what you're saying is essentially that this is a way we'll have a source of information to put into the view when we have it.
Frode Hegland: Yeah, absolutely. And just really briefly, on the whole notion of hallucination: that is the whole point of generative AI. Generative AI has to hallucinate; the problem is just that sometimes it hallucinates in the wrong way. But that's more of a philosophical point. Later on, after Andrew's update, when we have the little interaction thing, we'll go back to something similar to — but much simpler than — what we discussed on Monday, which is exactly what should be available. Which will be fun.
Mark Anderson: Okay. And a quick thing, because I've been buried in paper writing all day: did it get into the agenda for today to check off with people? I know it's only a two-page doc, but the paper is almost done — essentially the marker paper that we're going to put in for the demo in Poznan. And I know there were some outstanding issues that people wanted to raise.
Dene Grigar: I’m going to talk about that in the next section. Cool.
Mark Anderson: Yeah. That’s fine.
Dene Grigar: Okay. That’s coming up. Hang on to that. I need the final blessing of everybody.
Frode Hegland: Hang on to your hat, indeed. So then: the European meetings, to satisfy people who have to deal with those timings, will be 10 a.m. Central European on Mondays. I don't know exactly what they will be — nobody knows — but I expect they will be primarily around Adam's experiments and some design work. They will of course be video recorded, as these meetings are, and written up in the same way. Whether the scope changes or not, of course we'll keep having that dialog, but it's better than just informally meeting at coffee shops and keeping everything offline. And Fabien, if you want to have specific sessions for what you're building — either sharing with the quote-unquote Adam Time, or separate, or whatever — for a period of time, of course we'll try to organize that, either in the evening or daytime or whatever. But for now, at least, we have one victory. It took a whole Google spreadsheet to get there. Right. Any other announcements?
Speaker5: Before suggesting anything? What’s that?
Fabien Benetou: I will see how that unfolds before suggesting anything — just iterating to see if maybe it fits perfectly there, or maybe something else dedicated. So I'll wait and see, but I'll try, as I mentioned in the spreadsheet, to attend them, because it's good timing for me.
Frode Hegland: Okay, perfect. Any other announcements? All right. So, Dene: overview.
Dene Grigar: Two questions. First, we need an image. I want one like the example I gave you, right? One where there are actually lines, so we can see the connections, so that we can show our colleagues that we're talking about hypertext — you know, more emphatically, with an image. Images mean a lot. So if we can pull something today, Andrew, when we're in the lab together, that'd be great. And then my next question — Mark, sorry — you made a comment about one of the references we might need. Not need. You said that in the Slack channel.
Speaker5: I’ll share that.
Mark Anderson: I’ll share the document. One moment. Perfect. Just.
Dene Grigar: You’re going to bring it up in Yeah.
Mark Anderson: Yeah, yeah, in Overleaf. Overleaf, just...
Dene Grigar: I’m getting comfortable with Overleaf. I actually used it last year with Klaus, but he did all of the bibliographical stuff.
Mark Anderson: So hopefully you’re seeing an overleaf.
Mark Anderson: Oh, sorry, wrong one. Let me just go to the right one. I'll just compile — it'll take a second. There we go. So, it's very short. It would appear — I think because there's front matter on one page and matter on the other — that ACM's format will not cope with having a full-width diagram. So at the moment we've got a placeholder here; that happens to be a shot of the author map, but in a sense it's just place-holding. So we want that. We've got the rest of this page to fill — in fact, we can put more text in if we want, because the references aren't counted in the two pages — just in case people have stuff to raise. We could certainly put another image in, because we have some stuff that we're doing in VR, and I think it might be useful to give a flavor to people who've never, in a sense, used the XR experience, just to give them an idea. On the point of the references: okay, so there was some confusion in the drafting about the way the thing actually works. For those unfamiliar with Overleaf — or with LaTeX, I should say — the way it works is: you put a citation in the text in the LaTeX, you put the item in the bib file, and then LaTeX does the rest, and it does the numbering. So you don't manually put in numbers as you might do in a word processor or something. I think in this case it's a bit difficult — I don't know if I can make it bigger for you.
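[Editor's note] For anyone following along who hasn't used LaTeX referencing, the division of labour Mark describes looks roughly like this. The entry key, authors and title below are invented placeholders, not entries from the actual paper's bibliography:

```latex
% In the references.bib file: the data for one reference, with no number.
@inproceedings{hegland2024demo,
  author    = {Frode Hegland and Dene Grigar},
  title     = {A Placeholder Title},
  booktitle = {Proceedings of ACM Hypertext},
  year      = {2024}
}

% In the .tex file: cite by key. LaTeX/BibTeX assign the number
% ([1], [2], ...) and build the formatted reference list automatically,
% so numbers are never typed by hand.
As described in the demo paper~\cite{hegland2024demo}, ...
```

Renumbering is automatic: adding or removing a citation reflows every number and the reference list on the next compile.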
Speaker5: We can see you.
Dene Grigar: Do that, Mark. How do you make it — I mean, how do you make the text bigger? Not the fonts, but the actual page bigger?
Mark Anderson: Well, you can do this if you've got two screens — some of us would put one on one screen and one on the other. No, I agree; anyway, I tend to proof it outside Overleaf, but this is a quick check. Essentially, I think this — sorry, this marker, number four, referring to Visual-Meta — really goes where that placeholder got put in, because the two phrases, I think, both relate to Visual-Meta, which is this reference down here. And I can fix that quite easily.
Dene Grigar: Sounds great, Mark. And then you had something else you wrote in the chat on the slack channel about what we could say.
Mark Anderson: Well, it's just that, essentially, you'll notice we've got half a page of empty space, which we don't have to use, but we can if we want. So if there's anything we wanted to add — I was just spitballing possible things. One is that you might choose — this is really more for the PIs — I know we mentioned everyone in the acknowledgments, but you could, for instance, make a statement broadly of the main contributors: who's done what.
Speaker5: Mark, if that’s felt useful.
Frode Hegland: On that note I do think we should consider actually removing the last two names in acknowledgments because they’re great people, but they haven’t had anything to do with this work.
Speaker5: Okay?
Frode Hegland: Just because they're members of the community — that's great. But, you know, everyone else here has really made this. What's the general feeling? What do you feel, Dene?
Dene Grigar: That's fine, I'm happy to do that. I didn't want to leave people out, and I wasn't sure if you're talking to people on the side — I mean, you have conversations with these folks that I don't have, so they might have influenced this a little bit. I'm going to put one after Leon's name, and then Rob Swigart. Okay.
Frode Hegland: That's great. Let's hold the acknowledgments just for a second. I'm wondering if maybe we should acknowledge Ismail, who has been our spiritual help and who really got us Sloan.
Dene Grigar: Okay. That’s fine. Add anybody that you want on this. I’m not trying to leave people out at all. No, no, no, I was just saying everybody.
Frode Hegland: I'm not saying that. I'm just saying I haven't gone through the acknowledgments — I just thought we should do it together.
Speaker5: Mark.
Mark Anderson: Can I make a suggestion? I think you've got two lists here, really. Viewed from the reader's perspective — and trying to stand slightly outside the authorship at this point — I think what would be useful is to know who are the people who, broadly, as in this meeting here, have been regularly contributing. And I think it would be useful to know, for instance, that Andrew has been doing the primary programming, with much input from Adam and Fabien. It's just that sense of — one of the points of academic communication is that people may want to correspond with people, so it helps to know. So rather than someone having to write to Dene when they really wanted to write to Andrew — that could be kind of useful, I think. If you want, you could put in a whole laundry list of people who are broadly part of the wider Future of Text who you feel have impinged on this — bearing in mind this is not about the Future of Text; this is about this particular project. Yeah, but.
Frode Hegland: That's not what I'm saying, though. I'm just saying that Ismail and Vint really got us Sloan — that's why; they really made this project happen.
Speaker5: Well, what I suggest what I would say this.
Dene Grigar: Then can I say this? We'd say: we'd like to acknowledge the members of the Future of Text and XR team — Fabien, Adam, Brandel, Peter, Leon, Rob and Andrew Thompson. And then we can say: we'd also like to acknowledge the support of the two members of our advisory committee, Ismail and Vint Cerf.
Mark Anderson: Yeah, that would be — I think that's the best way to do it. And I think it's clearer to the reader.
Speaker5: Happy. Yeah.
Frode Hegland: Very good. Thank you. Do you.
Speaker5: Want to put that out that.
Dene Grigar: Right now I’m going to write that right now.
Speaker5: Cool.
Frode Hegland: Oh, you're driving it, Dene. Okay. I just thought it was Mark right now, but no, that's great. Brilliant.
Mark Anderson: No, we can see that — because you can tell. I don't see myself because I'm obviously logged in. But others have access as well; if you were to log in, I'd see two little icons up here.
Speaker5: Right. Okay. Right. Well, that’s all very good.
Frode Hegland: And I'm going to be sharing a slightly better version of my paper, for contributions by those who want to, again soon. I started overcomplicating it, and I'm now trying to bring it back. I put something in Slack if you want to read it — it's not that interesting. Peter, when he is ready with his paper, will also share it on Slack for comments. Right, Peter? Excellent. But then, for the grand event, we have Andrew. So over to you, Andrew. I'm also going to put a link in here for...
Speaker5: Where people. Hold on. Hold on.
Mark Anderson: We haven't — can we just check we've formally finished with this? Because I know Dene wants to get this tied down, and I know there's an image we haven't made yet. But by way of sort of trying to wrap it up — this is Mark, by the way.
Speaker5: That’s exactly.
Mark Anderson: Just let me finish a second. By way of me trying to wrap it up — because at the moment I'm broadly acting as an overseer of the Overleaf more than the author of it — if there's any more that anyone wants to add, speak now, or it's sort of done. Because apart from the two authors, and any sort of tidying I have to do, we'll treat this one as done, and then it's out of the way — because I know Dene would like to know whether this is it or not.
Frode Hegland: Yes, exactly. That's why I was going to ask Andrew: before we go into Andrew's main demo — Andrew, what do you have that looks most like hypertext, and spatial, that we should try to use for the screenshot here?
Andrew Thompson: The most aligned spatial stuff would probably be from about a month ago, when I was experimenting with the save/load system. There's a couple of clips in the videos where I've got a bunch of the citations all laid out, and a bunch of the found sources, with lines connecting all of them — I was making a kind of big web to show that it all saves and loads. That might be useful. I can also help create a tailored screenshot with Dene when I'm in the lab today, if we have something specific in mind that we want for it.
Frode Hegland: I think that would be great. What do you think, Dene?
Dene Grigar: All right, good — I get to say something. All right, I'm going to talk now about this paper. I was getting ready to say: I just recompiled it, so it's now in an updated form. I will download it and drop it into Slack for everybody to see, because not everybody has access to Overleaf. Okay.
Speaker5: Can you, can you deny it? Can I just.
Dene Grigar: Finish my talk? Just shut up.
Speaker5: I can’t see what you’ve done.
Frode Hegland: I'm just saying — you said you recompiled, but it hasn't updated. I'd like to see it.
Speaker5: Has? Yeah.
Mark Anderson: Right, I'll recompile it again. And when this stops going around, that's done. The changes have all been on the second page, as far as I'm aware, so any changes are in what you see.
Frode Hegland: Now — thank you. I just wanted to concentrate on what you were saying, so I'd like to see it. I couldn't see Ismail and Vint. Thank you very much.
Mark Anderson: In fact it might be even clearer if we just do this. Because we’ve got stacks of space. Okay.
Frode Hegland: Dene, did that really upset you? In which case I apologize.
Dene Grigar: I can't fucking talk — let me talk. Just let me say what I want to say and quit interrupting me; it's driving me nuts. Okay. So: I recompiled the document. When Mark looks it over and makes whatever final tweaks he needs to make — because he's the expert in Overleaf, I am not — and lets me know it's ready, I will download a copy of this and drop it in our Slack channel for the four of us to look at one last time, before Andrew and I meet, so that Andrew and I can pick a picture, get it to Mark, and then have a final version by tomorrow at the latest. And then I will submit this to the ACM Hypertext conference on our behalf. Okay. All right. Now talk.
Frode Hegland: And I apologize. I don’t know where your great obsession.
Dene Grigar: You interrupt me every time I try to say something. I can’t get a thought out. And you, before you let me finish, you keep talking, and that just drives me nuts. And you don’t. Don’t stop doing it. And I have a right to be upset about that. So quit doing it. Just say okay and don’t do it anymore. We’ll be fine.
Mark Anderson: I can't find — I seem to have lost the control, so I'm sticking my hand up now. Just a very quick thing: when we get down to doing the image, it would be useful if you, having chosen it, have a caption in mind. And the ACM also require that we effectively put in alt text, which gets hidden in the PDF. So if you have a sort of description in mind, that would be super, because it saves me having to sit around and make one up. So essentially: we need an image — and if you feel there are a couple, to be perfectly honest, we've got room for at least two or three. So if, when you've been looking at stuff, you feel there are more, or you just can't decide, I don't see any problem with putting a number of them in. The only thing that isn't yet in the text — over and above the figures — is that we probably need just a little bit of shim text, and I can do the words; well, I can do the layout of that. But yeah. So essentially, images with their caption and description, or a suggestion thereof, would be massively useful. And on that note, I'll shut up and stop sharing unless...
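[Editor's note] The alt-text requirement Mark mentions is handled in ACM's acmart LaTeX class with the \Description command, which sits alongside the visible caption and is embedded invisibly in the PDF for screen readers. A minimal sketch — the filename and wording below are placeholders, not the paper's actual figure:

```latex
\begin{figure}[h]
  \centering
  % placeholder filename -- swap in the chosen screenshot
  \includegraphics[width=\linewidth]{xr-citation-web.png}
  \caption{Citations and sources laid out spatially, with connecting lines.}
  % Hidden alt text required by ACM; not rendered on the page.
  \Description{A headset screenshot showing document panels arranged in
    space, joined by lines that indicate citation links.}
\end{figure}
```

Recent acmart versions warn (or error) at compile time when a figure lacks a \Description, so supplying the wording up front avoids a last-minute scramble.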
Speaker5: Anyone else like.
Dene Grigar: I like the idea of having an image of the VR environment. So I think we'll have at least two, and if we can find a few more, that's great. I think the more images we can show the better, because we're trying to explain something to people that they have probably never experienced.
Mark Anderson: Yeah, yeah. And I think what we can probably do, therefore, is put a little bit of preamble text before each one, essentially saying: look, this is basically the nub of what we're doing. The pictures will have a sort of short caption, and that'll be fine. So unless anyone wants to comment any more on what they see on screen, I'll stop sharing so we can get to the other things to be shown. Thanks.
Speaker5: Okay, thanks. Right. Okay.
Frode Hegland: Yeah. Over to you, Andrew.
Dene Grigar: I thought you were going to talk about the high resolution thinking paper now, because it's next on the agenda.
Speaker5: I thought the.
Dene Grigar: Following. If we’re following the agenda, the next thing we’re talking about is the high resolution paper. And then we’re going to talk about Mark and Adam’s paper briefly, and then also Mark’s paper. And then Peter has a paper we need to know the title of.
Speaker5: I didn’t think we’re.
Frode Hegland: I wasn't planning on going into detail about all the papers today, other than the group demo paper. I don't have anything specific to say about that paper today, but the interaction design discussion later on will raise questions that are directly relevant to it.
Speaker5: But, Mark, did you.
Frode Hegland: Want to talk? Sorry. Usually we just talk about the fact that these papers exist. Mark, do you want to talk about your visualizing paper?
Dene Grigar: You’re on, you’re muted.
Mark Anderson: Not really, except I think my title is now going to be "Lower Your Page Breaks and Prepare to Be Remediated: Towards Writing Tools for the Digital Now", because I was trying to get something that actually speaks to what it is. But the gist of it is essentially that we've moved from the paper, the print age, to the digital age, and our tools have moved with it. And I use XR as the provocation to say, well, we're remediating it now, except not very well. And the reason we can't do it well is that we're mired in sort of paginated behaviors, because simply we haven't really moved forward. Basically it's an idea to say, okay, we can start to look at things, and I just think it feeds into the edge of what we're doing here. And with Adam and I, I haven't written anything particularly on the paper; I have a stub for it. But we have some ideas in train about visualization, and I think that's the only other one that I'm directly involved in. And I have said to Peter, well, it's been brought up here, but I'm very happy to have a look at your paper in draft.
Dene Grigar: May I ask a question, Frode? What I was hoping we would do with the high resolution paper is just talk about it briefly, and you give us some direction on what we should be doing with it. You put it in the Slack channel this morning, I think, and I haven't had a chance to read it, but can you give us direction on what you want us to do on that paper?
Speaker5: Not really.
Frode Hegland: Because the discussion we had on Monday about what to put on the map became a very difficult discussion, as you pointed out, Dene. We used language quite differently. So in the interaction design section today I really tried to simplify it, because once we know the kind of stuff that should be on the screen... and I'm very grateful that you, particularly Mark and Dene, both helped with the initial condition for what should be there; we now know what that is. I have been working a lot mentally on what the hell should be there and available, and I've hit some serious stumbling blocks that I look forward to discussing with you guys. The high resolution thinking paper is very simple, mostly. I hope that after a discussion today, where we get a little bit more of marching orders for Andrew, so to speak, I can write something a bit more sensible in the paper. And based on that, everyone should feel free to see how they can add to it.
Speaker5: Because the paper itself.
Frode Hegland: It is basically the notion of high resolution thinking. The end of it is basically saying, you know, a bit of a dream: people should do anything, any way they want; however, these are the constraints we're working under, and that's a little bit more about our work. But it is worth saying that it's kind of a counterpoint to Mark's paper, because he's saying forget the rectangular screen, and I'm saying that we probably do want a rectangular screen, but not a paper-legacy one. And I don't think Mark disagrees with this; I'm just saying it in terms of how we're framing things. I've been so incredibly lucky to be in touch with Bill Atkinson, asking him about HyperCard and why he chose frames rather than scrolling, because I do have a strong feeling that we probably do want to use card-sized things a lot. But what does that mean? That's going to be a huge discussion, because we don't want to be constrained by cards; we want to be augmented by them and have the freedom to go into other things. And that is why, Andrew, what you've done for today with the outline view and these different views is just fantastic to see.
Speaker5: So that’s what I’m asking you.
Dene Grigar: Can I ask if you'd put your paper on Overleaf so that we can actually make changes to it? Because when you give it to us as a PDF, it's hard to add things to it.
Speaker5: Yeah. I mean, I.
Frode Hegland: I did Word and other formats last time, but it's not really ready for that yet. Because I'm not a very finessed writer, for you guys to go in now and say "this is a bit odd" may not be useful. And by the way, Dene and Mark, I am redesigning Reader in order to better allow you to make comments, because we do not have a good comment system in the world. So there are some ideas that I'm looking at to support you.
Speaker5: Mark I.
Mark Anderson: Just a quick one, and I don't mean this in a snippy sense at all, but I would try to move yourself towards the formatting you're using in the ACM template, which doesn't use this highlighting in the opening of the paragraphs. It's just easier to read it in the form you're going to submit it; it makes the proofing a damn sight easier, because you're not having to mentally swap the formatting in and out.
Frode Hegland: Oh
Speaker5: Are you talking about?
Mark Anderson: I don't know if that's something that Reader is doing, or it's just a choice. As I say, I'm not making a judgment on it, but as the time counts down, the closer the document is in its styling to the submission layout, the easier it is, you understand.
Speaker5: Okay.
Frode Hegland: All right. The bolding was just to make it easier for you to skim; I don't expect anyone to read it properly now. It's not going to be there. Sorry, Dene, I didn't see this as coauthored so much as something anybody could add to later. But yes, I'll open the paper up, and then we can do that.
Speaker5: So just one other thing. Okay.
Mark Anderson: Go ahead. Sorry, I spoke over you.
Dene Grigar: I guess I thought this was going to be a coauthored paper you were leading. So this is your paper, and you just want us to comment on it?
Speaker5: No. Is that right?
Frode Hegland: That is not what I'm saying. What I'm saying is that writing long form I find really, really difficult. And in our community there's very, very little writing by people; you know, it sometimes happens, right? I'm not talking about you, and I'm not talking about Mark. I'm talking about the electronic literature community; I'm talking about the Future of Text in general. Right.
Speaker5: I.
Frode Hegland: I thought you wanted to coauthor the ones where you're listed. If you want to coauthor this, that's fine; we can do that. Okay, I'll open it. It really doesn't matter to me; we have another month. My priority now is really to find out what should be on the map in XR, because that is the thing that we're building. So, yeah, Mark, if I send you a Word version, can you please put it in Overleaf for us?
Speaker5: I mean. Yeah.
Mark Anderson: Just so I know, and it doesn't matter too much for the drafting stage because it will be in ACM format: I take it this is the paper you're going to put into Claus's HUMAN workshop? Or is that another paper you've got?
Frode Hegland: Okay. Dene is writing here: "I had thought you were leading the team paper." I thought the demo paper was the team paper, because... okay. So what do you mean by team paper, then? Because the demo is the team work, no?
Speaker5: And our first link.
Dene Grigar: When we first laid these out, we had a list of papers. There was going to be a long paper that you, that we, were going to lead. And I decided we also needed to have the demo paper. So I said, I'll peel off and do that one instead of the spatial hypertext one, and I dropped that one with Rob Swigart. And I started the demo paper with the idea that the longer paper was going to be yours. And now that's not happening. That's fine; I just need to know. So now the demo paper is the paper for the team. That's good to know, because I had no idea.
Speaker5: Okay.
Frode Hegland: The long-form papers are past schedule; we can't submit that. We only have Blue Sky. That's all that's left, right?
Speaker5: You.
Mark Anderson: You can put a six-pager into the practitioner track, which is probably where this would get punted to if it turned up. So it's a six-pager; you can do six pages. My feeling is, this close to the wire and with all the other stuff we're trying to do, my saving grace would say, well, actually, we want to keep our powder dry for the funders. So if there are a few sort of curtain reveals we want to put in, we've got half a page left on the two-pager, and then I think we're done, because although we've got another couple of weeks, it just bounces to the next deadline. So if we haven't got a clear form of a six-page paper we want to write now, I would punt on that, to be perfectly honest, but I'm open to other views.
Speaker5: Okay. All right.
Frode Hegland: I will.
Speaker5: Dene.
Frode Hegland: You are in charge of the academic side of things for this project, as far as I see it. So if the way you feel the paper should be handled is the way the paper should be handled, then the fact that there were misunderstandings is not really a problem, I think. But I'm here to lead the design side of things. As far as my commitment to Sloan goes, I feel we have a commitment to produce a report for them; in my mind the academic side is more of a side project, though it will support us in other ways. So right now I've really been focused on what we're building, not how we're writing it up. I'm very happy to change the high resolution thinking paper into more of a team paper. But in terms of that, at this point: everyone, what kind of contributions would you like to make to the team paper? Asking particularly Fabien and Adam, who aren't tied to another paper. Well, Adam is tied to Mark's paper. But what about you, Fabien?
Fabien Benetou: I mean, I guess the most obvious thing would be the technical aspect of WebXR; I imagine most people are not aware that it even exists. And it feels like the most direct way. I know hypertext doesn't mean the web, but the web happens to be a pretty pragmatic substrate if you want to interconnect documents. So, if that's useful for this crowd, I would be happy to do a short introduction to WebXR and why I believe it's relevant here.
Speaker5: Yeah, I.
Frode Hegland: Think that makes perfect sense.
Dene Grigar: So here is the timeline for that two-pager: it was going to be due the 26th. I was shooting for the 26th even though there is an extension, because I have an exhibition that I'm putting together for Victoria, and I'm taking Andrew with me, and James Lesperance. So I'm already turning my attention to setting that up; we start our training next Wednesday. I'm on a timeline, so if we have anything we want to add to the two-page paper that has been in the works for the past month, please send that to me today so that Mark and I can get it into the Overleaf. Adam and Fabien, you're invited to be on this paper; I did not know it was a team paper. Peter, please send me anything you wish so we can add it and get it done. I need to have this done this week, because I'm turning my attention to another project that has been long in the making, right? So, yes, Mark. You're muted again.
Mark Anderson: Sorry, this is to try and avoid me talking over people, right? Yes, very happy. I mean, I can put everyone on. The only thing we'll need to bear in mind is that if we put about four more authors on, we will probably lose about a third of a page of content. Not that that's a problem at the moment, but just bear that in mind; it's the nature of this awful thing. You're trying to put a quart into a pint bottle with a paper, and you're always trying to squeeze stuff in. Okay. Thanks.
Dene Grigar: Yeah. The reason why I didn't think this was a team paper is because it was in a channel with just Mark, Andrew, Frode, and me, and I did not mean to exclude anybody; I thought the team paper was going to be Frode's paper. I'm happy just to leave it as it is and put your names on it. I mean, I've got 72 articles published in my life, so I don't need to worry about this for any kind of tenure and promotion; it's just laying it out, right? And we did not promise Sloan a paper. We promised we'd be at the conference giving a demo. But when we do a demo at ACM Hypertext, we have to write a paper. So this is that paper. So tell me if you want to add something to it. Can I have it at least by tomorrow morning?
Frode Hegland: Denny.
Speaker5: A to.
Frode Hegland: To repeat: I'm perfectly happy for the high resolution thinking paper to be a team paper. I thought it was the demo paper; it's a misunderstanding, and I'm perfectly happy with that. I'm going to open it up. But in order to know what we're doing, I really think we need to focus on what's going to be in the map view. So please give me a few days; I'll share it with everyone in a better format. And then I think it would be extremely relevant for Fabien to have a section on WebXR in there, and so on. I just don't feel it's ready, because I've written so many papers (obviously nowhere near as many as you) on the promise of doing this kind of thing in XR. This is the first time we're actually doing something, so I really would like to have more: not just "we should be able to", but to say we're now working on this and this, and these are the issues around it. So I'm perfectly happy for that to be the team paper.
Dene Grigar: Okay, so my deadline is: I need this in by Monday. All right. So whatever we decide to do, Frode, whatever you do with this paper that I have written with Mark's help, you know, do whatever you want with it. But it's going in on Monday, okay? Because I've got to get this done, and I will not be in Europe with you folks.
Speaker5: The.
Frode Hegland: Yeah, the demo paper that I have seen, I’m very, very happy with. I look forward to seeing the images that you and Andrew are producing today, and I completely agree with you. If anyone else has anything to add, please get in touch. I don’t have anything to add. I’ve gone through that with you guys, so that’s all very good.
Speaker5: For that paper.
Dene Grigar: Okay. All right, so the two page paper is done except for the image.
Speaker5: Yeah.
Frode Hegland: Is there anyone in the group now who would like to add anything? Or forever hold your peace?
Speaker5: Okay.
Frode Hegland: We are married to that paper, right? Brilliant. Now.
Speaker5: Andrew.
Andrew Thompson: Is that my cue?
Speaker5: Sorry. Yes.
Frode Hegland: Just saying your name. Right. Probably not very contextually useful. Yes, please.
Andrew Thompson: I've pieced it together, yeah. So we've got an update this week; technically it's been two weeks. It's mostly the outline being added into the document view, but there's quite a bit of minor stuff in the background as well. I guess the first visual difference people will notice is that there's a menu bar now at the bottom of every document that loads in; it's matched to the design Frode put together a couple of weeks ago. It's got all the buttons, but they don't all do something yet. So there'll be things like tags and map that are just there; I'm working on them.
Andrew Thompson: But we also have a focus button there now, so no pop-up menus in this current test. Adam will be happy; it is very streamlined. I like how it interacts. As for the outline: you can toggle between read and outline. Read is what you guys have seen already. Outline now has all the headers essentially rendered in, and they're all links. So when you select one, it'll jump back to read mode and scroll the document to that specific header, and you can read there. Plus it does a little bounce animation just to draw the eye, so you're not lost, because sometimes, if you're near the bottom of the page, the header won't appear at the top; it'll be sort of halfway down the page because of scrolling. Which brings me to essentially the last bit, which is a whole bunch of little bits of polish that I've put into the document and the outline view. I kept thinking of new little things that I could add, and I'd be like, oh, this will take just a couple hours. And then it takes longer than that and introduces like three bugs that I then have to fix, each of which introduces more bugs. So it took a lot of chasing, but things work significantly smoother than in the last document update. You may not notice it, but if you compare the two, there are things like the clipping works a lot better, scrolling is smarter, things like that. And then there's a bunch of bug fixes. That's pretty much been it.
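The outline-to-read jump Andrew describes (scroll to the chosen header, clamped so a header near the bottom of the document sits partway down the page, plus a small bounce to draw the eye) can be sketched as two pure functions. This is only an illustration of the behavior, not the actual prototype code; the names `scrollTargetFor` and `bounceOffset` and the units are my own assumptions.

```javascript
// Hedged sketch of the outline → read-mode jump described above.
// Names and units are illustrative, not from the real codebase.

// Where to scroll so the chosen header sits at the top of the viewport,
// clamped so we never scroll past the end of the document. Near the
// bottom, the header ends up partway down the page, as noted.
function scrollTargetFor(headerOffset, docHeight, viewHeight) {
  const maxScroll = Math.max(0, docHeight - viewHeight);
  return Math.min(Math.max(0, headerOffset), maxScroll);
}

// A small damped oscillation to "bounce" the view and draw the eye,
// evaluated per frame with t in seconds since the jump.
function bounceOffset(t, amplitude = 0.02) {
  if (t <= 0) return 0;
  return amplitude * Math.exp(-6 * t) * Math.sin(12 * Math.PI * t);
}
```

Each frame, the renderer would add `bounceOffset(elapsed)` to the scroll position until the oscillation decays away.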
Frode Hegland: Yeah, I did notice Andrew. I didn’t notice the specifics, but it just worked better.
Speaker5: I.
Frode Hegland: I noticed that at the bottom of the document you now have a bar: read, outline, map. That's a really interesting issue, because the way I think about outline in traditional software, and in building outlines, is that it's kind of a mode you go into to see either just the headings, or headings and highlighted text, or headings and something else. So that then has a subsection of what should be there. Now that we're building fresh here, it may make sense to combine the read and outline modes more, so you can more dynamically expand and collapse a little bit. But then the question is what you should see. And the really big design issue, which is going to be part of what I'll talk about after you and after Fabien, is how you go from the map view over to, let's say, Adam's view or Fabien's view, right? It seems to me that the map is an environment; it's not framed as such. So that's something I look forward to discussing. Adam, please.
Adam Wern: Yeah, my comments were on similar things. With the current prototype, I really like the polish and the kind of thought that has gone into it; it looks very smooth. I haven't tried it yet, but it looks very, very slick. And a big kudos to Andrew, who's pulled that off in XR; it's quite an accomplishment. But taking one step back: we now kind of have a dual mode, read and outline. And just looking at the video, you see that there is enormous XR space available. The obvious solution would be to have the outline and the map to the side of the text. Of course you can toggle it off and back, but we're not using the XR space. With this kind of design, it feels more like an iPad app, or even something very 90s; folding text is a 90s code editor or text editor thing. So from now on, I think we should put things side by side and not make it too hypertextual, or interactive in the sense that it changes modes all the time. I could easily see the outline to the side: all the headlines on one side, and the actual text next to it.
Adam Wern: So you know where you are in the text all the time. The good thing about XR is that it relieves the working memory a bit: you can see where you are and read the detail at the same time. If we make it too interactive, where you shrink it to outline mode and then out to text, then suddenly, as a user, you need to remember much more and use working memory. So that's my provocation: we should use less of the tabbed-pages approach and do more of an actual XR expansion. I hope this doesn't sound too critical, because I've been away for a while, but I think now is the time to use the extra space a bit more. That's my provocation, if you like: that we don't just do an iPad app that could have been done on a regular screen, but actually use the space more. I'm sure the implementation is very good underneath, and it can be reused for the modules that we expand out. Long rant over. Sorry.
Frode Hegland: That's exactly my problem; I've been thinking exactly that, because when you're in the map view, it shouldn't be in a rectangle; it should be your entire space. If you're working in a traditional environment, it's kind of easy: put a tab at the top. But when you're working in XR, you have no natural affordances, so to speak. That's why it's interesting, you know, especially the Fabien sphere that we have, right? What will that now mean, and how would you get your controls? The only way I seem to be able to deal with this is to just ignore everything. So, you know, in the map view that is 360, how do you interact with that one? Adam and Mark.
Adam Wern: Mark, you have your hand up and I’m sorry.
Speaker5: Okay? Right.
Mark Anderson: Me? Yeah. Now, I really like this, and I just put that in the sidebar. We're now getting to the provocation of, well, okay, how many things go in the thing? Because you've got the overall environment. Certainly if you're not in AR, you're juggling the external environment; if you're in a fully immersed environment, it's all connected. Then there is an interesting question about whether you have, in a sense, views within views. And this comes into play if you do a decomposition. My gut feeling from working with sort of maps, albeit not in VR, doing this sort of knowledge decomposition tool, is that it doesn't necessarily need to be complicated, because broadly you know what you're doing: if something appears, if you get a box here where everything else is just a point in space, you get it because you asked for it. So in that sense, I don't think it has to be confusing. It may be sort of aesthetically different, but that's a different judgment. So I don't think we need to be worried in this. And it may well be, and I say may well be because I genuinely don't come to this with a fixed view of what should happen, that as you explore, or decompose, pick whatever term you want, you start maybe with a document. It depends on what you're trying to do: whether you're trying to find out more, say, about the structure of the document, or about how the document sits in the citations or references around it; you're probably going to do different things.
Mark Anderson: And I can imagine it's also the case that you may have, at one and the same time, some parts of the view really quite decomposed, and other parts very much at the high level. And to me, the joy of the XR space is that you can do that. I mean, I know it has to be built and it's complicated, which is why we're getting at it in small steps, which is entirely right. But what I've always seen as the end point of this is the ability to just basically play with it as you want; it's like a Lego set. You can make a train, you can make a house, you can make a dragon. It doesn't really matter; it's what you want it to be. So the bits that I think we're exploring are there to allow people to do that exploration. I don't know if that's just muddied the water. Anyway, that's my sort of hot take on this. And just to say, again, I really like the video for this week, and I haven't seen the demo yet, but I will get to it. So thank you, Andrew.
Adam Wern: Yeah. I think personally what I would like to have is a space where you can have multiple things open at the same time; they could be different things. Even two different maps, or one document and one map. So, a system that is slightly less modal, or at least where the overall thing is less modal. For example, the library: I want to have it open, and perhaps close it if I don't need it. But I want to have it open to the side, to be inspired, or to see a collection of documents over there and just pull one in. I don't want it to disappear; I don't want it to be a library view where you suddenly get teleported into a kind of document reading view, and then an overview, and then a mapping view. It could be done that way, but for the most part I want a much smoother transition, or no transition, in that they are all available at the same time: multiple documents, multiple notes, multiple collections of a sort. That would be my ideal VR space right now, as I see it. I don't know about you, how you see it.
Speaker5: Over.
Frode Hegland: There are certainly challenges to doing that, interaction-wise.
Speaker5: Absolutely.
Adam Wern: I would say that in a way it's easier. Handling modality, doing the hard transitions, is really, really hard. Look at Fabien's work: he has multiple things all over the place. If he had done ten different modes for everything, I think it would have been harder. And the same goes for my prototypes: I just put everything out there. That's also the beauty of XR. It can of course be too messy, but the opposite, the constant change of modes, is even harder for us. I think it's like going to a new room and forgetting what you were doing a moment ago. That's my thinking principle. Yep. Over.
Frode Hegland: We will take that further in the design discussion. Does anyone have any more specific comments on Andrew's work today? Mark, did you want to speak to what you wrote, maybe?
Speaker5: Sorry.
Frode Hegland: I'm just saying, if anyone has any more specific comments on what we saw of Andrew's work, that would be useful. I see you've been making some notes; maybe you want to speak to them, because I think...
Speaker5: Oh, no, I well.
Mark Anderson: I was just commenting on the fact that, and this is difficult because clearly I'm not the person making it, so when I say "simple", of course I'm not coding it. But I was just listening to Adam say, well, if I don't want the library anymore, maybe I want to close that away. Now, when we did this in 2D windows (sorry, that's what the note I added was about), we tended to do it with drawers, or that kind of metaphor: drawers or tabs and that kind of thing. It seems to me that if you're in an XR space, there's no reason you can't collapse a display, or part of the environment, back onto essentially a point, which can be styled however you like. You just say, put that away. It fits with this expansion and compression part, because you know that that point there, however you envisage it, is the library. So if you want to go do stuff with the library, you can put it out there: it's there, it's visible, it's in your attention space. And if necessary, because it's just another point, you might want to move it over there, because that now makes sense for the thing you're doing. The nice thing about that, if it's indeed at all possible, is that it breaks away from the sort of hard frame we have working in 2D windows, as we do, where you're in this innately rectangular space and you tend to augment things by playing around at the edges.
Mark Anderson: The edge, in our XR space, is broadly anywhere we want it to be, or, you know, around behind us, out of the way. So there are all sorts of things we can play with. But it doesn't feel natural to me that we will necessarily want the tabs-and-drawers metaphor that we've all got used to over the last 20 years of UI, because we don't need to do that. And I just go back to this notion that it's easy to forget, when doing these mapping things, that broadly you end up with what you ask for. So if it looks slightly different, it's because of what you asked for. Once you understand the basic affordances and interactions of the space, which obviously is the learning part, then beyond that, once you know what's there, if that thing opens to a list and that thing opens to, I don't know, some dynamic graph or something, that's cool, because that's actually what we want. And that's the thing that's really, really hard to do at the moment. So, to me, that's really what brings the whole thing alive. Thanks.
Frode Hegland: Yeah, absolutely. Those are the issues, right? So I'm thinking, Fabien, what you posted earlier today, do you mind showing that now? And then we can return to this discussion after that, because I think this discussion could go on a bit.
Fabien Benetou: I mean, I’m always happy to to show stuff. So I will do the usual demonic thing of trying to share my screen. So good luck.
Frode Hegland: We all.
Speaker5: Crossing our.
Frode Hegland: Fingers, but there's no pressure.
Speaker5: Okay.
Fabien Benetou: So let’s see. Please let me know if you can see it.
Adam Wern: Yep.
Fabien Benetou: Thank you. So what? What for now. So I don’t.
Speaker5: Hear. Danny.
Frode Hegland: Look at this. This was randomly in my pocket. It’s been a while since I’ve been wearing this jacket. It’s the ticket back from Vancouver.
Speaker5: Right?
Frode Hegland: Yes, please. Fabian. Go ahead without interruptions for me.
Fabien Benetou: I'm used to it by now. So, as a little teaser: Microsoft did a demo where they basically used, I think, PTC, where somebody designs an actual gamepad, as you can see on the right of my screen, and then you could see it in augmented reality. And I thought, yeah, that's useful, but I don't see how innovative that is. And I would sound a bit silly if I said this and couldn't prove it, so I basically tried a bit today to re-implement it. So here I'm doing it with a 3D model, which I see on the screen, and then I can see it there. What is interesting is that I drag and drop, so I can of course control it from my screen, and it moves there. But what's more interesting, and what leads to something more interesting, is that I drag and drop a 3D model on my desktop, and I see it there in augmented reality. I did this and thought, that's in itself interesting, but for this group, not so much. So then I did another version, here, where what I drag and drop is a PDF. I have a 3D model on the left, and then I use Frode's preparation from earlier. So I literally drag and drop from my desktop, from the, how do you call that thing? Not the file system itself, but the file explorer. And then what it does is it tells the web page, hey, there is a new document; it converts it to images, and then I can manipulate those images with the same interactions as usual, namely pinching with the left hand, and then you can imagine a UI around it. I thought it was interesting for this group in the sense that it addresses a worry we have quite often: how do we get the damn PDF into the headset? So that's one potential way, basically.
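The pipeline Fabien describes (a file dragged from the file explorer onto the page, handed to the web app, and the PDF converted to page images) could be sketched roughly as below. This is an assumption-laden illustration, not his actual code: it assumes pdf.js (`pdfjs-dist`) is available as `pdfjsLib`, and the helper name `pdfFilesOf` is my own.

```javascript
// Hedged sketch of "drag a PDF from the file explorer into the page".
// Not the actual implementation: assumes pdf.js is loaded as `pdfjsLib`,
// and `pdfFilesOf` is an illustrative helper name.

// Pure part: keep only the dropped files we can turn into page images.
function pdfFilesOf(files) {
  return files.filter((f) => f.type === "application/pdf");
}

// Browser-only part, guarded so this file also runs outside a browser.
if (typeof document !== "undefined" && typeof pdfjsLib !== "undefined") {
  document.addEventListener("dragover", (e) => e.preventDefault());
  document.addEventListener("drop", async (e) => {
    e.preventDefault();
    for (const file of pdfFilesOf([...e.dataTransfer.files])) {
      const pdf = await pdfjsLib.getDocument(await file.arrayBuffer()).promise;
      const page = await pdf.getPage(1); // first page only, for brevity
      const viewport = page.getViewport({ scale: 2 });
      const canvas = document.createElement("canvas");
      canvas.width = viewport.width;
      canvas.height = viewport.height;
      await page.render({ canvasContext: canvas.getContext("2d"), viewport }).promise;
      // The canvas can now become a texture on a plane in the XR scene.
    }
  });
}
```

The same shape works served locally over home Wi-Fi, as Fabien notes, since everything happens in the page once the drop event fires.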
Frode Hegland: That was spectacular to see, for many reasons. One of them is obviously my dream: that you view a PDF in that space, and then you do a thing, and all the metadata from that is in Andrew's space.
Speaker5: So. Yeah. Wonderful.
Frode Hegland: Peter, please speak if you can. Unless you have to be quiet for the sake of dialog and for the sake of
Fabien Benetou: I don't see any raised hand, but to bounce on what you said: I used a PDF as an example, but other files would work, and if there is extra metadata siloed in, or part of the metadata to pass within that file, that would work too. Getting the MIME type or the file type and then doing actions on it, or adding UI specific to that kind of file, is not, I don't want to say it's easy, because of course it depends what kind of transformation you want to apply to that file, but it's not complicated. It maps naturally onto how you want to handle files, basically. And also, tiny detail: I've done this locally, on my Wi-Fi at home, because of all the privacy and security and whatnot. But one could imagine doing the same thing online. Last week there was a reminder of rclone as a way to have remote file systems, so we can imagine all of us here having a merged remote file system where we have, for example, the publications from last year's hypertext conference. And then we share that workspace, not just the files online, but a kind of, I don't want to say physical, visualization of that workspace. And it doesn't have to be local; that's my point. Local-first makes sense, but it could also be over the web.
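The per-file-type handling Fabien describes, read the MIME type and then pick an action for that kind of file, can be sketched in a few lines. This is only an illustration: the handler names and action strings are hypothetical, not taken from the actual prototype.

```javascript
// Hypothetical dispatch table mapping a MIME type to an action descriptor.
// 'convert-to-page-images' mirrors the PDF-to-images step described above;
// the 3D-model entry is an illustrative assumption.
const handlers = {
  'application/pdf': (file) => ({ action: 'convert-to-page-images', file: file.name }),
  'model/gltf-binary': (file) => ({ action: 'load-3d-model', file: file.name }),
};

// Decide what to do with a dropped file; unknown types are ignored rather
// than rejected, so new handlers can be added without breaking anything.
function handleDroppedFile(file) {
  const handler = handlers[file.type];
  return handler ? handler(file) : { action: 'ignore', file: file.name };
}
```

So a dropped paper.pdf would be routed to the image-conversion step, while a dropped binary glTF model would be loaded as a 3D object, each with its own UI around it.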
Frode Hegland: I'm so glad we took this kind of detour, because it helps open up the thinking of what we're discussing around Andrew and Map and so on. And then Dene's comment here in the text chat about playing Scrabble: I think that's really worth thinking about, not necessarily for the game itself, but because Scrabble is a scaffold. And, you know, we're talking a lot about loose interactions all over the place, but sometimes we do want to work in a scaffold in XR. So taking that on top of what Adam said, that everything should be in the same space if you want, complicates and opens up the discussion again.
Speaker5: Okay.
Frode Hegland: Without dropping what you're saying, Fabien, would it be okay if we now move on to the design bit? Because I think what we need to do now is try to agree on some vocabulary. Is everyone okay with that?
Speaker5: And.
Adam Wern: Could I suggest a thing that would be interesting to test for next week or so, for Andrew?
Speaker5: Yeah.
Adam Wern: That would be to combine the two views in the video, the library abstract view with the document on the side, so we get a feeling for how it would feel if it was less modal, with everything in one space. You have your library, you have perhaps subcollections, and then you could pull out or send out a document from that list, or multiple documents perhaps. And later on, from the documents, you could pull out a few text snippets or quotes, but that's for further work. For now, just combine them.
Andrew Thompson: You're talking about having them rendering at the same time? So the library is kind of a background with all of the text, and you can grab stuff from it. Is that what you're talking about?
Adam Wern: Yeah, basically. Now the library view is a bit darker. In your posted video there are two main modes: the library view, where you select the two-column things with the abstract, the library collection, and then you click something and you enter a kind of reading view. It would be very interesting to have them in the same view, to see both the library and the one document, or multiple documents, just to get a feel for what it would be like to be in one space and not go into totally different modes of being in XR. It's a suggestion.
Andrew Thompson: The group will have to come up with a redesign for the library if we do that, which is fine. Because I have a debug thing that I do when I'm developing, where I have them both at the same time, and it's incredibly annoying, because I'm always accidentally selecting new documents when I'm trying to grab stuff. It's just constantly pulling documents in, because the library is right there. I only do it because it's faster to test stuff, and I can't imagine that being a nice experience unless we redesigned the way the library currently works, which could totally be fine; we'll just have to pick a direction there. Is that something you want me to design? Because usually the design stuff comes from the group as a whole, and then I implement it.
Adam Wern: It's not for me to decide, it's just a suggestion, but having the library next to a document is a very important kind of non-modal user interface that I think would showcase XR much more than having bite-size, iPad-sized smaller objects on the side. I think it would be much more wow to be able to see your library, the collections, and multiple documents at the same time, without switching views. And for the technical part, Andrew: would it be hard to do? I imagine that everything is done in three.js, as nodes in three.js. Can't you hang both the library and the document in the same root node and have them exist at the same time? Or is there a fundamental problem in there?
Andrew Thompson: In principle, yes. But the layout of the library is not like a node; it is taking up a bunch of space. It's essentially another scene view. It's the sphere on the wrist that expands out, so it's partly built around a sphere.
Adam Wern: Okay. So it’s not in the same scene at all. It’s two different scenes.
Andrew Thompson: It depends on what you're asking. Technically it's in the same scene, but it's laid out like a separate scene, which is why I say the design would have to change. I can render them at the same time, no problem, because three.js lets me do that. It's just that it gets in the way with the way it currently looks. So we could change some things about how it looks, the way things are placed, render it differently, and then I could have it alongside other stuff. I'm not sure if what I'm saying is making sense, though.
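What Andrew describes, one scene with the library and the document as sibling groups that can be shown together or separately, might be sketched like this. A tiny stand-in class is used instead of real THREE.Group objects so the sketch is self-contained; the group names and mode strings are hypothetical, not from the actual codebase.

```javascript
// Minimal stand-in for a three.js Object3D: children plus a visible flag.
class Node {
  constructor(name) {
    this.name = name;
    this.visible = true;
    this.children = [];
  }
  add(child) {
    this.children.push(child);
    return this;
  }
}

// One scene root; the library and the reading view are sibling groups,
// not two separate scenes that have to be swapped.
const scene = new Node('scene');
const libraryGroup = new Node('library');    // wrist sphere, document list, ...
const documentGroup = new Node('document');  // the currently opened paper
scene.add(libraryGroup).add(documentGroup);

// Instead of modal switching, toggle visibility per group,
// so a combined mode can show both at once.
function setMode(mode) {
  libraryGroup.visible = mode === 'library' || mode === 'combined';
  documentGroup.visible = mode === 'reading' || mode === 'combined';
}

setMode('combined'); // Adam's suggestion: library and document together
```

In real three.js the same shape holds: both groups hang off the one scene, and the renderer draws whatever is currently visible in a single pass.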
Adam Wern: Well, we can take that offline if we need to. But the most important part is the design philosophy here: how modal should the whole interface be in the longer term? Is it lots of small sub-modes, reading mode, library mode, collection mode, map mode, or is it more a unified space? I would argue firmly for a unified space, but that's for the project leaders or the group to decide.
Andrew Thompson: Yeah, it's more for today.
Frode Hegland: Sorry.
Speaker5: Sorry, Tom. Okay.
Frode Hegland: Andrew. Sorry.
Andrew Thompson: Yeah. It's just that, I mean, each space is run by different functions, but they could, in theory, interact with one another just fine. Like I said, this is not a technical limitation; the design just didn't go that way, which is totally fine. So if we change it, which I have no problem doing, we have to change the design.
Frode Hegland: So let's go into exactly this discussion. It is a crucial discussion, and there are kind of two layers to it. I'm going to show you what you've all seen a million times: this kind of map view, right? We have many views, but in a map view we have to decide what's going to be on the screen, and we have to decide what interactions should be available. So I'm trying to really, really simplify. And the question you just brought up now, Adam, has been really bothering me for a few days. The way that I personally think it should be done, for discussion, is as follows. The map, or timeline, citation, people, globe, those things: let's call them environments. Right? This may not be right, but this is what I'm presenting here. The read view would be an environment too; that's so obvious I forgot to put it up. But also, ideally, what Adam and Fabien are doing separately would just be a different environment for the same data. So right now, Adam, I am thinking about how you somehow go out, but I don't think it's useful to go into a kind of desktop thing and then go back in. Ideally, you should be able to be in a timeline view, do a thing, and it's a map; or do a thing, and you've got a people-centered view, so everything moves in front of you. This does not currently consider having a map, a timeline, or a citation view at the same time, though I do think that is a very, very good and powerful idea. But in terms of the nomenclature, or whatever we should call it: the environment is basically the space. And then I use one environment as an example, our core one. There are two things in here, and this is what some of us have been discussing offline all week. One of them is: what the heck should we show? And I have some lists here. I think we should just call them elements.
Speaker5: Because if we.
Frode Hegland: Call it nodes or if we call it anything else, it might be a little off putting or a little too technical. It’s just stuff. You know what stuff should be there anything from a title of a document to someone’s name. Right. And then I’ve been working on I previously had layouts for arrange, but layouts is nice to have a saved layout, so that goes up to the notion of Uzbeks, a word we should probably not use outside our own community because it needs explanation. So I can imagine with the language here that you go into the map view, for example, you do stuff and then you can save a workspace or you can save a layouts, a thing. So that and this is also a philosophical question. When you do that, are you saving it for that data only, or are you saying the saving the layout style? Let’s say that you want to have a map view that is pre-populated by your own keywords for the sake of arguments.
Speaker5: You should probably be able.
Frode Hegland: To save that as a as a thing. And we need to decide what to call that. Maybe that’s called a workspace. So when you open up other stuff into that, these keywords are there ready to be connected to. Right.
Speaker5: And that’s it.
Frode Hegland: That’s that’s all I have for the language. Now, I wonder if you have comments on the language for how to label these things.
Speaker5: Yeah. Go on.
Frode Hegland: Whoever, I can't see.
Dene Grigar: Mark first, Mark first, and I'm second. Okay.
Mark Anderson: Right. I'll try and be succinct; fat chance. Okay, so my gut feeling is the environment is the whole thing. It's modal; it's a whole thing you're in. And my question about whether we can nest environments goes back to this point. The question about elements, and giving them a name, is how changeable they are. To me, you put a point in this environment and you tell it what it is by the metadata that arrives. So if you tell it it's a text bit, or a text sort of reading frame, it becomes that; if you tell it it's now, I don't know, a pink elephant, it becomes a pink elephant. It doesn't need to know what it is. So what you're calling elements are, to me, more like the metadata, the classes of things that you'll have in the metadata. I know it's a sort of wafer-thin difference, in a sense, but I think it fundamentally changes the flexibility, because if you start having these fixed elements, it just makes the whole thing more complicated, since we know we're going to have to put data behind all these things. Say we decide that a thing that is a timeline will be expected to have the following information; well, this thing might not only have a timeline identity, it might also have a personal identity, so the timeline might be the person's significant things that have happened. So I'm not saying the elements aren't necessary, but I'm not sure I'd use this approach, or at least I wouldn't treat them as things within the map.
Mark Anderson: I think the more interesting question is whether we need a separate name for the sort of top-level environment. In other words, if you close everything back to the dot in the middle of the screen, that's your environment, certainly in the immersive sense; and within that we are technically able, and indeed would wish, to host another viewable environment, like a snippet of a timeline or something. Or we might use Leon's idea of almost diving through a window into another one. I don't quite know, but my gut feeling is that we don't want to be putting in structure where we don't need to. I think we want to put the structure in the metadata, so we've got all that classification and it can continue to evolve. Because the other thing is, if we try and make a list now, the one thing you know is that by next week we'll find there was something that either shouldn't have been on it or needs relabeling. And that's not because we made a mistake today; it's a discovery process. And that's much more flexible if it's just pushed off into the metadata space, because it's easier to make those changes: you just go to the stored metadata, or you add one to the list. Anyway, I think that's it. I'll stop there.
Dene Grigar: So I wanted to talk. Can you go back to the doc you were showing us? Because I like the terminology you're using. Mark and I were going back and forth on Slack about terminology, which I thought was really interesting, and Mark, your points are well taken. In my experience with Storyspace, early Storyspace, nodes were the actual objects that held the lexia, which was the text. The nodes and the lexias were separate entities. And in the context we were talking about on Monday, we were collapsing the two terms, and that was bugging me. So I like the term elements: anything that's interactable in the space is an element, right? So "interactable elements" makes a lot of sense to me. We can show them and hide them. So it could be stuff in the menu, as a listing in the menu, but then there's their expression in the actual environment. I also like the idea that when we enter into the VR space, we're entering into an environment, and within that environment there are choices. Right? We're not going to use "VR aspect", I think is what you said, which I thought was useful, but we're going to use the term perhaps landscapes, or workspaces. Right, workspace. So we land in an environment, we work with the workspace, we choose one of two or three, and then we interact with elements in that environment. That terminology makes a lot of sense to me. It helps to have the same language, I think. So, thank you.
Speaker5: I mean, Monday.
Frode Hegland: Was really, really difficult because it had to be because it had to highlight how specific these language things are. So. Yeah. Okay. Now, okay. In terms of interactions where I am at, I really think that because if we try to do interactions for a timeline citation, people go blah, blah, blah, it’s going to get I think each one of them will need some kind of specific interaction affordances. Like if anyway. I still like the idea of elements in a range, being two things that you can somehow interact with to get suboptions such as a range will be what you see here, a range is column and so on. Right? I’ve been very inspired by the whole body in XR thing, and I really, really like our wrist tab. Fabian, for the hundredth time. Thank you. So what I’m thinking about is. But when you’re in a map. Imagine if you have two balls on your arm, right? I don’t know if you can see me, but you have one here and then you have one on your. The bendy bit here, the elbow. So if you touch the bendy bit on the elbow, then you get a menu of options here. What do what elements to see. If you touch the ball on your where you have your watch, you get all the options for how to arrange it right. And that is the beginning. Thinking of how we can do that. I see Fabian has a comment and I look forward to hearing that. The only thing I want to park on the side is how you go back up, or rather how you go then into a timeline view or citation view without having to go to some kind of a desktop. Because I agree with Adam very much that that would be quite artificial. So how you either change what you’re in or add to it would be very useful. Fabian, please.
Fabien Benetou: Yeah. I'll show you a piece of art that I made myself; I'll try not to shake too much. So that represents a head and hands and a hoop, as you can recognize. The idea came, I think, from a discussion we had not too long ago: that indeed the wrist is useful. I think it's powerful because it's tangible, literally; we can actually feel it, and we know where we are. And also it's properly tracked, unlike elbows or knees or whatever, which probably will be tracked in the future, but not today. But why I show you my little drawing is that I think what is interesting, if we do rely on our virtual body, is that it overlaps with our actual physical body. Having basically an out-of-body experience, temporarily, is interesting in the sense that I'm here, I'm in XR, I press a button or whatever, and then I step out, and my body, represented very minimalistically with a sphere and virtual hands and whatnot, is there. And on this I can bind things: bind actions on the hands, the fingertips, the head, etc. That's why I use the out-of-body experience as a kind of metaphor, because there we get affordances on a body that is intangible; yet if we don't move it out of ourselves, I don't think it's practical. I haven't tried this, so I don't know how practical it is, but it sounds both feasible and, I guess, quite easy to understand and practical to build on. And I think it's generalized enough that we can do quite a few things there, like: how can we use our body as shortcuts, and make it configurable, basically.
Frode Hegland: Yes. By the way, just as an aside: you're talking about the word arrangement, and I like that very much, but it may not be the same thing. The option here is that when you have selected certain things, you can then arrange them as a column or whatever. But the way that you save the layout of the space may not be based only on that. That's why I'm not sure if we should use arrange and arrangements. It's difficult.
Dene Grigar: I think we need to. I mean, I see what you're trying to get at with that. So maybe the words arrange and arrangement are not right, and I know you don't want to use layout. We need a word.
Speaker5: Okay, so.
Frode Hegland: Let’s nail this one on the head. So the notion of this is you go into let’s stay with a map view. You do this that and the other. Maybe you bring in external computational elements. You bring in your own keywords that aren’t necessarily in the document. You do things here or there or whatever. There’s two different things you can say. One is the specific document you’re in to that goes with the document, so you can open it anywhere. The other one is everything that isn’t the document itself. So when you open a new document, all these things will be there. And I think it was Andrew who used the term workspace, and that just seemed kind of to fit. I don’t know what you feel about. Everything. That isn’t the data. What do we call it?
Dene Grigar: Well, I guess I want to go back and say I want to get from point A to point D, right? So we put the headset on, we enter into what Andrew has produced. What do we call that? I'm calling that the VR environment, or rather the XR environment. So here we are. Now we have in front of us the library and all these things, right, the possibilities. But we may not want to use the environment as it's currently given to us; it's a default. We have a choice between different types of experiences. Right? ViewSpecs was the term that Mark put forth, which I thought was good, but I'm okay with something else. So what is that going to be called? I call that workspace: pick your workspace. Now once you pick the kind of space you want to work in, and believe me, I don't want cards, you know, I want the more open environment, once I pick that environment, I get to start working, right? So now we have these elements in the workspace.
Speaker5: I just want.
Frode Hegland: To ask you about the language there, Dene, because you said once you pick this environment, but you wanted to call it a workspace. So I’m writing down what you said. So we enter the Excel environment, which is basically a library. It’s a list of everything you have, right? That’s the default. You said very, very much. Agreed. When you then choose to go into read mode, map mode, blah, blah, blah. Do we call that an environment or an experience or what do we call that?
Dene Grigar: Yeah, that's a good point. So I'm actually backing up more, before we even get to that point. What we need to be clear about, with our people at ACM Hypertext and the demo, is the language we use. Right? Because they're not going to know a damn thing about any of this; many of them have never put a headset on. So when they put the headset on, they enter into this environment; right now they're in the XR/VR environment. Now we give them a couple of choices. Do you want an open space? Do you want to work in more of a closed space? Do you want, you know, birds flying overhead? We give them some ViewSpecs, whatever that's called; I think the term that Andrew used was workspaces. What kind of workspace do you want? Pick one. All right, now the workspace loads, and you see the library. And I think what we want to do is not let them land right in the library, but give them the ability to choose the library. Right? So: click on this, and your library unfolds. The library is there; that's an element. And each individual item in the library is an element as well. Those are all interactable elements within that space. That's the way I'm trying to conceptualize this. And then, once you choose your library, your menu is there and you have Read, Map, Timeline; you've got all these choices. Once again, the menu is an element, and all the things inside the menu are elements, but it is a menu inside the environment that expresses a workspace. Does that make sense to everybody?
Frode Hegland: Mostly. I'm trying to do it hierarchically here. So when you put your headset on, you are in a library. You shouldn't have to choose a library, but you have the option to open another library. Your basic library should already be there, right?
Dene Grigar: Yeah. At some point, what we're going to be doing in year two is letting people select their own library. So right now we're giving them a library. In the long run I'd like to use my own library, and it's not necessarily hypertext papers; it might be from Ecids or some ORCIDs or something like that. So we're spoon-feeding them for year one and opening it up for year two. I'm thinking broader than just this one demo, because we're talking about the greater notion of language. We're not going to want to change the language in year two; whatever we come up with now, we want to build on going forth. So I don't mean to get too far ahead, but at some point they're going to choose a library, right? In this case, though, we're going to let them land in the library. But at the same time they get a choice of what that space, that environment, is like. Yeah? So some people might like what you like, more of the Apple Vision Pro experience, where you have stuff there in front of you and you're in a calm environment with maybe some sound. I don't want that for myself. And somebody else might want cards and the feel of paper. I think we should give them those three options, simple, really simple. And then they pick their workspace, and now they get to work, and all they get to do is play with that list of references in so many cool ways. Okay.
Frode Hegland: So this is really important that transition. Right. So you have a list of documents in a linear fashion.
Frode Hegland: How do you now get to the map?
Dene Grigar: So by map. This is what we were kind of struggling with last week. What do you mean by map?
Frode Hegland: So that you can do things like this.
Dene Grigar: So that, I think, is what's confusing me. So that's what you mean by map? Okay.
Dene Grigar: Mark has his hand up.
Mark Anderson: Just quickly: I think where the confusion comes in is that we unintentionally gave ourselves a problem. We're going to give people a basically completely artificial selection of documents. They have to be in a proceedings, which is why they happen to be that group; but they're only in there because they happened to be published at that conference at that time, and often they may have little or no relationship to one another. So the map element that we just saw really makes more sense when the data set is a bit more developed, or, say, when you're looking at your whole academic library, because then you will likely either know, or have glossed in some way, the data, such that these relationships can become apparent. Taking a set of things cold out of a proceedings will give not that many connections. So I think that's why there's confusion. And I'll just throw in one other quick thing: I sense there are two parts to the terminology here. There's the customer-facing side, the user side: you know, "this is my workspace; there are many like it, but this is my workspace", you know how the mantra goes. And that makes a lot of sense, because that's what we identify with. But I think behind the curtain it's really quite different. It is just the abstraction: okay, the thing we're going to give you, attached to this point or this environment, is this set of metadata, because that makes it much, much cleaner behind the curtain, so we can build. But at the human level we want it to be a bit more formed, because that's more comfortable for us. Yeah.
Dene Grigar: Let me put this forward. So we land in the space, and that is called the environment, right, that VR environment. And then the library and that mapping thing you're talking about, the workspaces, are the choices: how I want that environment to be expressed. What's hard for me is the term "map" for the element you're showing me. There's got to be a different name for that, because that's not really a map. It's really the interactions, it's really the connections, it's really the expression of what the user is doing in that space. And it's going to change for every user, so it's not necessarily a map, right? It becomes one at some point, but not ultimately. I mean, I've got a library in front of me. There's a library; now I want to touch one of the elements in it, one of the references. Once I do that, I have a menu that comes up, and it has different things I can do: I can see the abstract, I can pull up the full-text version, I can see the authors and background, you know, the authors' information and the publication date, get some of that metadata. So I select abstract; now I can see the abstract for that reference. Right. So that, to me, is not a map; that's actually something else, and I don't know what that is. There's got to be a better word for it. Ultimately, when we lay out that whole thing and we have all those interactions and stuff, that's definitely mapping. But the initial action is not. The initial expression of the act is not the map. Does that make sense?
Speaker5: So I have a, sorry.
Frode Hegland: Anyone else have something on that first? Okay. So a question, then: in order to get into what will be called a map, or knowledge map, or something entirely different, or a timeline view, or a citation view, etc., my expectation is that the user has to choose to open a paper or proceedings to go into those workspaces. They don't just get a map of the entire library collection by itself. Is that agreed or not? I mean, at some point they should be able to open multiple timelines for different documents and view them together and have them overlap. But please, yeah, Mark.
Mark Anderson: Yeah, and I say this as gently as I can: I get a bit more worried by the "must". The thing is, we're doing two separate things here, really. We're trying to work out the script of a demo, where we want to take people through a list of things in an order they'll understand, so they don't get lost. But actually the bits we're using are much more generic. We're only starting in a library because, for our demo, we decided we'd start in a library. I mean, if that's the only place I could ever start in XR, I'd feel a bit sad, because, you know, as the environment grows, I'll put in what I want to see. So I just think we need to keep those two strands separate in our minds, because otherwise all that happens is we design ourselves into a corner. It's like going to the library and asking for the book that's the logical next thing to the shelf in front of you, and someone says: oh no, that's on floor four, you know, Dewey Decimal System, thanks. We don't need to do that in XR.
Mark Anderson: We really don't, because we've got all the information under the hood, behind the curtain. We can do that. And if we want the pink pony to be a green elephant, we can do that. If we talk too structurally about it, you know, "you can't do this in a timeline view, you can't do that in a map", I don't think we need to be that prescriptive. Otherwise we're dragging forward constraints of the past, which were constraints partly of the display space and partly of the way things were constructed; we're dragging those forward in a way that we don't need to. I totally accept that, for the terms of the demo, we can't just throw everything at the wall, because it'd be way too much for the user. But I would urge that in defining the demo we don't also end up defining what the space is. They're separate things. The demo uses the terms that make it understandable to the new person doing the demo, but that doesn't define what the space is.
Frode Hegland: I am not sure I agree with that approach. I do agree that in XR we have huge opportunities, but I do think that we first need to look at our audience, which is academics looking at papers.
Mark Anderson: Well, I am an academic, so, you know, I wasn't exactly speaking from another viewpoint.
Speaker5: No, no, there.
Frode Hegland: Was an end coming. And that they have their documents organized somehow. Of course, you should be able to open a single paper in this environment and read it from your computer or whatever, and absolutely agree with that. I do think, though, we do need to have different workspaces because they will have, at least initially, different affordances. Now, if it turns out that, let’s say the knowledge map or what it will be called in the timeline are essentially the same thing with slight differences, we can merge them. But I think that providing a completely free flowing even outside of the demo mode will be basically too much. You know, this is to help us this, like the Scrabble thing in order to make sense of things. It’s good to have some place to put things. But I do need to. I see that Danny has to go to campus. We do need to decide on a few things. One of them is at this point, do people need to open a paper or proceedings to go into the workspace, or is the entire library going to be interacted with in this way?
Speaker5: Are you talking about.
Mark Anderson: This specific demo? The whole I mean.
Speaker5: This is the the demo.
Frode Hegland: I’m not talking about a demo; I’m talking about the things that we are building. And I do think it’s the same thing, but... Sorry, Adam, was that your hand up?
Adam Wern: Yeah. Today we said that we were going to talk about the mapping, and the mapping view, looking at the Sloan grant, is listed as future or advanced work. And we haven’t done the basic things yet. So I would propose to hold off a bit on the mapping view, because its value to academics is not fully established yet. Listening to the recordings of you and Mark, and from my own intuitions as well, it’s not really clear what value it brings. But we know that skimming the corpus, reading the text, and perhaps pulling things off are important to academics. So let’s nail that first, without going into the mapping view, because it will distract us, and it will distract Andrew as well. Especially at this point, for Poznan.
Speaker5: I don’t think.
Frode Hegland: We’ve defined what a mapping view is, because pulling things off to the side is, in my mind, the mapping view.
Adam Wern: If everything we pull out is a map, then in that case I agree with you. But if it’s the thing you showed in your keynotes, or in Author or Reader, then it’s a very different thing.
Speaker5: The way I.
Frode Hegland: See it is that we have decided we’re going to focus on proceedings. That will be the the corpus, that will be the limit of what we are presenting. When someone is looking at the proceedings, they will have the proceedings listed as a list. It will be a traditional vertical list. The reason we call it a map is, first of all very happy to change the name, but it’s just a shortened form of knowledge map right where they can start interacting with things they can choose to.
Adam Wern: That’s very confusing, because knowledge graphs and knowledge maps are specific things in other contexts, from what I know. So you’re using the same term in a very, very different way from how I use it, and from how I’ve read other people use it.
Mark Anderson: Can I just jump in for a second about where the confusion arises? You’re absolutely right; I don’t think a lot of people use mapping. The people who understand mapping probably overlap with the people who use knowledge exploration and tools for thought, which is a fairly niche thing. There’s a sort of overlap because, through parallel evolution, Storyspace has similar maps. These are open-ended exploration spaces. They’re just maps, basically: you put something on a map because that’s where you want it to be, and it is what you think it to be, or you make it look like you want it to be. And that is all it is. It’s definitely not a knowledge map, because it has no...
Speaker5: Okay, okay. I accept that.
Frode Hegland: We have ten minutes left. I accept that’s not a map. Please, let’s not worry about the language; we can find a better term. But one of the things I believe we have decided, in addition to using the proceedings, is that we should use the XR space for more than just lists in boxes. The space we are in should be richly interactive, in order for the user to be able to find out whether these documents, these papers, are worth reading or not. Which can mean that one user wants to list all the people on the side, or keywords, or whatever they want, in whatever we’re going to call this space. And this is what I’ve been trying at for a while. I apologize for using the wrong terminology, but I can’t think of anything better than the M word.
Dene Grigar: I can’t either. But I think this is something we can ponder, you know? I think we’ve got a really good thing to work on, and that is this vocabulary. And I think there’s nothing wrong with what you’ve done, but it’s a starting point.
Speaker5: Yeah, but we.
Frode Hegland: Need to get past this, okay. Because it is really going in circles. And the incredible thing we have here is the guy sitting with the beanie hat. Andrew is building this. And that takes us out of discussing this in circles forever, which is just insane, right? I’m so grateful for Danny and Andrew for making this real. Now let’s just call it the M word, right? So so what we need to we need to agree on a few things. I mean, Fabian better name. Great. Let’s use it. This is what we need to agree with before DNA runs off. User goes into a thing. We’re now calling that the environment in there. They have one library or more. They can either open more or less. When they open a proceedings. Forget about paper for now. They are in a basic environment for that, but they just see them listed in that. We’re going to call it again. The default one is the M word. They can start doing things to it. You know, they can start expanding and collapsing. They can start seeing all these different things. If that is what I feel will give the most value to the user. Because, you know, I’m developing reader and I’m finding that a lot of the stuff can be done on a 2D screen. What really uses the XR space is to be able to literally do that. So that’s what I’m hoping we can help. No n word m for map. Fabienne. I’m sorry. I didn’t even get that. You are right. That could very easily be misheard. Thank you. That’s both funny and very sad. Anyway, the thing. What it is. So I have started writing and I will share a PDF. Please, guys, in the last few minutes, just on your own, just write down the kind of stuff. You know what I’ll show you? It’s very simple. Then if you’ve got to run, you’ve got to run by.
Speaker5: Okay.
Frode Hegland: Yeah. So the elements in this M-word view obviously have to include the title, because our default knowledge object is the paper; it’s got to have the paper in there. Right. And you’ve got to be able to do things like hiding it if you don’t like it. Authors, dates, names, keywords, locations, institutions, events: these are the suggestions for what should be there. And then, when you interact with a title, with a paper, in this environment, we need things like "open", trying to use the plainest language. So when you say "open", I can understand that basically means you get a focus view.
Speaker5: But Andrew or.
Adam Wern: Does it, though? I still argue for not the focus view first, but the kind of side-by-side using the XR space. A deep-read mode is a very different thing, where it takes over.
Frode Hegland: So what we have here is... yeah, I’m fine, I don’t mind; that’s perfect. The point is that you should be able to expand things to see more. You should be able to lift it out of the list, as we have already had in what Andrew built before. And then, what do you mean by "open"? When I say focus mode, it just opens nicely; it doesn’t get rid of the background.
Speaker5: Right. That would be.
Adam Wern: When you say background, do you mean the environment or do you mean the library?
Speaker5: We’re in. Not in the.
Frode Hegland: Library. We’re in the M room now.
Adam Wern: Okay, but is it the list we saw a moment ago? When we press that button, does that move away, or is it... Yeah, I also need to go now. So obviously this needs more design here; I don’t think we can have anything like this. What you’re showing right now, I don’t think that’s usable at all for academics.
Frode Hegland: Adam, you haven’t been here for a couple of weeks, and I know you’ve been watching. Please let me explain what this is in context, because we’re running out of time. Is that okay? Because you’re kind of rejecting it without knowing what it is in context.
Speaker5: Well.
Adam Wern: I watched the recordings and your explanations of it and the things you posted on YouTube. I think at this point it’s much more important to focus on the reading experience and not this part, as the value of this is not really clear. It’s more an experiment than anything else.
Frode Hegland: The value of this is to find what’s inside. So if I’m interested to find out... you know, Lev Manovich is someone interesting to me. It’s hard to select here; I put him in a specific space, but he isn’t there. I want to see what keywords he is related to. I’m not going to do this every time, so I have these keywords; they’re not working here, which is really annoying.
Speaker5: But would you do that, Frode?
Mark Anderson: Would you do that, though? Go back to the demo. The point being, people are going to start with a brand-new set of papers. You may or may not know who that is; you may not know who those people are. So unless you’re going to throw up the library with a list of all the authors, what you’re showing is a mature library view, which is not what we’re giving the users in the demo. I think that’s what Adam’s getting at.
Speaker5: I can explain what.
Frode Hegland: That would be, but my thinking is that when they open.
Speaker5: This.
Frode Hegland: When they open the proceedings, we have populated it with all the names of the authors who have written something, but also all the names of the people who are mentioned who are not authors. You know, we put them somewhere in space. This is the notion of saving a workspace, or a view space. And then they have the opportunity, with the elements option, to show and hide these things, and they can start playing with it and make it their own space. Because, for instance, if I want to find out who has written about history, often it isn’t really tagged as history. But if I select, you know, Vannevar Bush or whatever, I’ll see where he is. That is interesting to me, not necessarily to someone else. So, to be able to have a multi... Yes, Adam.
Adam Wern: Yeah. So I see this as kind of an Author/Reader experiment, and you’ve been playing with it for several years now. Isn’t it better to do it there, and not take the high risk of doing it in XR and having it fall flat for the demo in Poznan? I mean, to me at least, I don’t know how to put it, but it feels unpolished. Or not even that; it’s not about polish. A bit unusable at this point. And the value proposition is quite low compared to the other things at this point.
Speaker5: Audience that.
Frode Hegland: uses the XR space for this, other than having PDFs in space?
Adam Wern: Well, the things I described: seeing a library, seeing a subcollection, seeing different subcollections, seeing abstracts, picking up a document, either in PDF image form, and browsing through the pages, but preferably something more flexible, like the HTML we have. Perhaps marking a few things down, like marking a few paragraphs. Perhaps browsing similar authors, browsing the reference section. I think that is more in line with the core use case, as opposed to this kind of view, which is still very experimental. It feels like we’re just recreating Author and Reader in WebXR, on valuable Sloan time. Is that really a good use of time at this moment?
Speaker5: Okay, it’s it’s okay.
Frode Hegland: It’s kind of annoying, Adam, because some of these things, I believe, had been decided in the group: first of all, not to deal with libraries but to deal with proceedings. It’s been over a month that the proceedings are the core thing. And secondly, when I use the M word, and I’m sorry that some people find it offensive, all I’m saying is that we present proceedings in a list and then you can do things with it. That is all I’m saying.
Adam Wern: No, well, that’s the thing: what you’ve shown us and what you’ve written down in the documents are very different things. So it’s a bit unfair to characterize our critique right now as if we’re talking about the same thing. The list and the reference list, nobody challenges that. We challenge the notion of pulling authors out and making some sort of automatic layout that is unknown to a new user in Poznan, and that has a very unclear value proposition compared to other things you can do with your limited time on Earth.
Frode Hegland: Okay, I find this annoying. I’m going to see you in a week; we will keep this discussion going, but...
Speaker5: You know.
Frode Hegland: These meetings are tough. I mean, quite a few times I’ve been sworn at today; Dene has called me the F word quite a few times. And then there’s a huge discussion around the use of the word "map". You and Fabian are both coming in to do amazing stuff in parallel, right? I have been fighting now for a few weeks just to find out, when you are reading a proceedings, what should be on the screen that you can show and that you can hide. I am not at all trying to get the map view of Author in there; I can do that on my own. It is the beginning of just saying...
Speaker5: To me.
Frode Hegland: It seems that instead of having lots of little rectangles floating above, being able to build relationships, to see things, to interact with them, has value. If you don’t agree with it, that is totally fine, but I would really like to see more specifically what you mean by that. Because, you know, I’m working on a 27-inch screen like you all have, and I can only get to a certain point with it. And by the way, the map view in Author doesn’t even work properly, so I’m not at all trying to copy it. But there are a lot of documents in the proceedings, and I find most of them boring. A lot of the ones from last year have nothing to do with hyperwords... excuse me, hypertext. I think it’s geographical stuff. I don’t want to see them, so I want a quick way to just hide those documents. And then also, as someone who doesn’t understand the field from the academic point of view, I would like to interact with the people behind it. And whatever we call those views, whether it’s the citation tree that you’re working on or whatever it is, I think it has some value. But
Adam Wern: Yeah. I’m not arguing against the last thing you said, but you said so many things. I think working with the references, and perhaps filtering by people, that’s fine. But what you’re showing is something much more elaborate, and also complicated, with a bit of an unclear value at this point, as you haven’t been able to figure it out in Author and Reader yet, as you say. Because...
Frode Hegland: I lost my programmer.
Adam Wern: Yeah, yeah, but it’s a crucial point to investigate right now, when the basics of this long list are not done. That’s what I’m saying. It may sound harsh, but it felt like you jumped away from the library and reference section to this mapping view a bit too quickly.
Frode Hegland: And references, I think, are overblown. The one thing I really don’t like about academia is the fact that you can’t say the sky is blue without having a reference to someone who said it before you, right? And, to an extent, that is really important. But there’s also the thing in academia that no one invented anything, because there was always precedent. Right?
Mark Anderson: I think that’s also not true, actually; I think you’re overblowing it slightly. It’s just that if you’re going to do something in a peer-reviewed environment, there is a general expectation that if you make a contentious statement, you have something to put behind it. That is not the same as saying you can’t say anything unless it’s cited. I think that’s being slightly unfair.
Speaker5: What I’m saying.
Frode Hegland: Is that if you want to get to grips with what’s in documents, there are many things that are relevant to the user at scale. One of them is names, right? We now have the means to extract names from documents, but in my case, Doug Engelbart is a good example of someone refers to Doug. I would probably want to see that article, see if it’s bullshit or interesting new, right? So there are many elements that people can refer to to decide what is good for them. And also in many ways to to read these things. Right. And the only reason I’m talking about a kind of a wide environment like this is quite simply to use the space. Right. And what you said earlier today, Adam, I completely agree with you should be able to have multiple things like this open like one you know, leave the library over there on the left, on the right here, have, you know, outlines of things and so on. I completely accept that. That’s a very good idea. I just also think so.
Adam Wern: But from hearing Mark and Dene talk about the academic work, they have been more concerned with the ideas in the papers, and perhaps the reference trail and so on, than with actually browsing people or jumping between people. Of course, that could be a nice thing to do now and then. But for more serious academic work, I think the ideas in themselves, how much they are supported, and the kind of more involved knowledge work is what we’re aiming for here for the academic. Or else it will feel very superficial, if we come to Poznan and all you can do, or most of the features, are about discovery and surfing between different people: that kind of very light surfing of the surface, and not anything in depth.
Speaker5: Okay, this is.
Frode Hegland: Really annoying, Adam, because I haven’t suggested anything of the sort. Right. I, I don’t understand why this is so contentious other than, you know, you literally haven’t been here.
Speaker5: I don’t think it’s.
Mark Anderson: contentious, Frode.
Speaker5: I let me.
Frode Hegland: Please let me just, you know, like Dene said, let me speak.
Adam Wern: You don’t need to be offended all the time, every time Dene says something to you; you’re doing the same thing. The critique is against the idea of having the map, not against you. And we’re losing valuable time if you take it as a personal critique, and if everything needs to be an emotional reaction in you.
Frode Hegland: I have not said anything against people reading documents. What you said now, I do think, was a critique, and I think it was a little bit offensive, because you said that all we’d show in Poznan is superficial. I’m not at all suggesting that. What I’ve been trying to deal with for quite a few weeks now is a view that is very flexible, call it whatever you will, where you have different ways of looking at what’s inside the papers. That in no way precludes interactions with the paper at the deeper levels; we just haven’t gotten that far. It was only today we even decided what some of these levels are called. So, when you look at what I’m proposing here, please don’t think I’m saying we should do that and ignore everything else.
Adam Wern: No, but what should we start with? Shouldn’t we start with the deep stuff? And what you... wait.
Speaker5: With.
Frode Hegland: What you regard as the deep stuff?
Adam Wern: Well, the papers, the knowledge in the papers, for sure, because that is the most involved work we’re dealing with: the text, and the knowledge within them, and the knowledge behind them, of course.
Speaker5: I don’t agree.
Frode Hegland: With that, because to deal with the knowledge in the paper... these are not long papers that you can’t just read, you know. So of course you should be able to read them in XR too, but reading them on a nice computer screen, or printed out, is also really, really good, for really reading the abstract and reading more. Absolutely. So what I’ve been trying to do is find out what the key is, because the experience we’re looking for here is an academic putting on the headset and feeling they can work in that space. We all agree on that. So the question becomes: in a reasonable amount of time, what impression can we give them of something they cannot do elsewhere? And that is to go through the very proceedings of the conference they’re at, because that’s a lot of data that’s just been dumped on them the very day they put the headset on. They’re not going to have time to go through it.
Speaker5: Yeah, yeah, yeah.
Adam Wern: I think we’re all clear on that, so that’s nothing new. We’re all on the same page that we start with the proceedings and so on. Right? No challenge.
Speaker5: Right.
Frode Hegland: So that’s why it can be a little confusing, what to do. And Adam, from our discussions, I actually don’t see what you object to, because all I’m saying is: list the things, let the user go through them in useful ways. That can include tapping on a paper, so to speak, and zooming out a bit to read the abstract, or to read the whole thing; of course it can. I just believe in these discussions we haven’t gotten there. But if we don’t get to use this space a little bit, you know, moving things here or there, and not just stuff that is explicitly the names of the papers, I think we’re not using the space in a very impressive way. Mark.
Mark Anderson: Okay. I think things got slightly, unintentionally, at odds. I think the point about the names was that they have more relevance at a slightly later point in the process. When you take a brand-new set of papers, it doesn’t work well. Actually, it would be, frankly, unprofessional to just go straight for the names or something, because you’re trying to look at the knowledge, you’re trying to look at what’s there and the quality of the ideas. Obviously, the fact that there’s someone you regard highly means you will have a view on it, but it’s not helpful to your understanding if you start with an opinionated view saying, well, we like these people, we don’t like those people. So that’s one of the reasons it’s not necessarily for reading new papers, which is, let’s be clear, our demo task. It doesn’t mean it’s not worthwhile later. And it was interesting, the example you shared earlier about extracting terms. Now that could be useful, because it could enrich the number of terms available when doing the initial triage. My gut feeling, from the experience of actually doing this, is that it’s probably going to be very hit and miss; you’ll end up with even more to consume. The thing it can do is that it may reveal stuff, or terms, that you were looking for that weren’t expressed.
Mark Anderson: But that’s an aspiration. In other words, just because we can do it doesn’t mean it’s a surefire thing that we get something out of it, but it’s something we can certainly try on the side. And we should do that for Hypertext ’22, which is the set for which we have both PDFs and HTML, because that’s probably our most useful, most up-to-date set, sadly. Ideally we’d use ’23, but there’s no HTML for that, and I think it’s useful to use a set that has both. So if we actually did that and looked at what was there and asked, does this actually make usable stuff? I don’t know. I actually would like to see actual output from that, from those 50 papers or however many there were in Hypertext ’22, because that is a really tangible test of whether doing that process at that stage is worth it. But the key thing you’re really trying to do is decompose the document. So all these other things are later-stage. They’re interesting, they’re relevant to the use of the XR space, but they’re not relevant to the scenario. And I think that’s where we’ve got derailed.
Speaker5: Okay.
Frode Hegland: I disagree with that last point; I agree with the earlier points. Because, you know, I’ve done a lot of work with Reader to make it easier to read a document, and I think, for my style of reading, a lot can be done on a 2D screen for one single document. I do want your decomposing thing, Mark. I want that strongly, and I’ve been begging the technical people of our community to be allowed to put some metadata in a document, so that when you open it, it does deconstruct, or maybe you have a 3D thing, and all of that. I think that is very important; I just haven’t seen any effort on it at all. So please note that what I’m talking about is being in an environment where you have that list, but also the things around it; it’s not just the names of people. And I do think the names can be useful. Because, first of all, Mark, I want to read your paper. I want to read the people I know. I want to be able to find them quickly, you know.
Mark Anderson: No, you shouldn’t. I think that’s fundamentally wrong. That’s not a good academic approach, because you’re privileging the people you know over the people you don’t. And that is not a good way to approach knowledge, because you’re more likely to value the people you like and know than those you don’t.
Speaker5: So Mark, Mark.
Frode Hegland: Mark, as of six hours ago, I also have a PhD. Okay? I’m talking about me. If other people want to do their academic knowledge work in a different way, absolutely fine. If someone, let’s say from a large university, wants to see who else has written from that university, that is useful too. I’m not saying we should ignore all the other ones. Furthermore, I am very wary of using AI for this, but I have a notion that in the future you should be able to have, like, a little LLM in the corner that represents something, and it can just point to papers that have something to do with that, agreeing or not. So all I’m really talking about is having a space where you start with a list, and there are all kinds of magical things around it that help you view it. That should not in any way stop you reading it. And the way you read it is going to be arm-wrestling between Adam and me, because I think we agree, but not on every detail. Adam.
Adam Wern: Yeah, well, in principle I agree with the last thing you described, that you could have specialized LLMs doing small tasks, referring to and working within your documents. But the mapping and the extraction things are taking valuable time from the other design things and the discussions. Entity extraction is, of course, not new; it has been done for decades now, and it hasn’t been proven that useful, as I see it. It is for some things, but that’s more large-volume data processing. If you get a list of 25 documents, like your proceedings, who on earth would want a special list of authors there? You could just skim the list. The authors should be next to the title, so having them automatically light up in that scenario, which is the scenario we agreed upon, having the proceedings from a specific year: it’s 25 documents.
Speaker5: Okay.
Frode Hegland: We’re all low on time. The thing that you have been working on, the citation view, comes into this as well. Let’s not argue about any particular usefulness; the idea here is to provide, in a reasonably accessible manner, many different ways people can interact with this corpus. One of them really should be: boom, what do they cite? And you go into an Adam-and-Mark-style citation tree. That is useful. We agree on that, right?
Adam Wern: Yeah. Yeah.
Speaker5: Yeah.
Adam Wern: But, well, to actually critique our own work: the discussions Mark and I have had, it’s not very useful just doing that. You have to do something with it, and that’s what I’m pushing for and what I think we should focus on, not talking about the keywords and the authors. We need to collect the papers you want to read. We need to find a workflow so you can get the printed paper, or the PDF on your computer, or print it, whatever, so you can actually do the heavy intellectual work, which is the thing that produces value in the end. If you just surf around papers, you’re not getting smarter, and it’s wasting time.
Frode Hegland: I know we’re not talking about just surfing; what we’re talking about is development. Okay, this is really frustrating, because I do think that at core the three of us really agree. I don’t know what this space should be called, but it should allow for superficial and deep reading. It should allow for connections to be seen and manipulated. And also, as you’ll see in a week, Adam, the spatial nature of the Vision, of putting things here, there, whatever, is very different from what you can do on the Quest. And of course you can’t do that in WebXR, but the idea is that you build these specific workspaces, or rooms, that are exactly what Mark wants, or what you want, or what I want. One of them can be just dipping your head into a proceedings or something you don’t really care about. But the notion of being able to show and hide, which is why I really like the idea of choosing what elements are visible and hiding them, and being able to do kind of layouts with them, will be very important. There will be more than that. But I think we should probably think about how far we can push this and what we mean, rather than... I don’t know what I’m saying. I’m not quite sure.
Mark Anderson: Okay, I’m going to have to drop. For what it’s worth, I just put a thing in the chat: proceedings at the moment are running at about 100 to 150, sometimes up to 200 authors, just so we know. Part of the problem is there’s a real uptick in multi-author papers, especially on the smaller end. So basically, with a bunch of grad students, you’re now getting 7 or 8 people on a two-page paper, which is, you know, actually quite meaningless; it’s probably been written by one person, realistically. So that’s just another bit of the fluff we have to go through. I have to drop now. Anyway, good to see you all. Okay. Take care. Bye.
Speaker5: See you later. Bye. But I think.
Frode Hegland: you’re supposed to build complementary environments to this as well. But we will continue figuring it out.
Adam Wern: Yeah. So, well, should we stop the recording and conclude by ourselves?
Frode Hegland: So we’ve agreed that the core unit of work is a proceedings, which is a catalog, to use your language, and thank you for that term. Through some means it has been opened up into the XR space. Let’s for now ignore what’s outside of that library or opening dialog, whatever it is; we have a set of proceedings in front of us, listed as they would be in the proceedings. What are the kinds of interactions that you would like to be able to do, to figure out which ones you want to read properly?
Adam Wern: Well, if the proceedings are like 25 to 30 objects, which for Hypertext is quite a small proceedings, then it would make sense to just skim them in some sense. And by skim I mean having a way of going piece by piece. As it’s just 25 of them, I can just as well browse through all of them; it’s so small. So just going up and down in some way, whether it’s waving or a button press, going line by line, or paper by paper, seeing the title, the abstract, perhaps the conclusion, and the authors, of course. Having that kind of specialized metadata view would make much sense to me. And some way of marking things as well: "I want to read this", or "I want to collect this". So a thumbs-up or a button to mark this for this session, so that in the end you have, say, three papers out of 25 that you actually want to read more of. Now you have a small collection that you want to take, perhaps even to your printer, or to your tablet or your computer; at some point you want to read them. Or maybe you just want to read one paper: you skim electronically, you skim three papers, and perhaps you read one of them. So a way of marking papers from that list to read more of is an important part. And I would love to see the document in there immediately, of course, if you have access to the journal, and we assume we have.
Adam Wern: Then you want to skim the text immediately to see if your intuition about those three papers is right, whether one of them is worth reading; at least you want to skim them, to get a first-hand look. And skimming the paper is actually reading the text. Whether it's a long scroll or a kind of PDF page, I don't think that's so important at this point, because you just want readable text: a large PDF or a beautifully rendered scroll of some kind. And I guess at that point you may want to do some margin markings as well. Maybe not even text extractions, but just mark a few places in the margins, either an interesting quote or some point that you want to ponder or use in your own essay. I think for Poznan we said that one of the use cases, at least, is that someone is writing a paper. So let's assume that you read or skim the 25 papers in the proceedings to see if there is something new on the particular topic you are writing a paper on. And perhaps in one paper there is one interesting thing, and I want to mark that in the margin of that paper so I can have it. OK, these are the things you do in a reference manager, of course, you do that kind of markup in there, but it's a very sterile environment and most people don't love it; they use it, but they don't like it. So that is the workflow that goes with what we said for Poznan, I think.
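The skim-and-mark workflow Adam describes (step paper by paper through a small catalog, mark a few, and take the resulting short list back to another device) can be modelled minimally as follows. This is a hypothetical sketch for illustration only; the class names, fields, and methods are assumptions, not code from any of the prototypes.

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    """Metadata shown while skimming a proceedings entry."""
    title: str
    authors: list[str]
    abstract: str = ""
    marked: bool = False  # the "thumb print" flag: read this later

@dataclass
class Proceedings:
    """A small catalog, e.g. the ~25 papers of a hypertext proceedings."""
    papers: list[Paper] = field(default_factory=list)
    cursor: int = 0  # current position while stepping paper by paper

    def step(self, delta: int) -> Paper:
        # Move line by line / paper by paper, clamped to the catalog.
        self.cursor = max(0, min(len(self.papers) - 1, self.cursor + delta))
        return self.papers[self.cursor]

    def mark_current(self) -> None:
        # The single-button "mark this for this session" action.
        self.papers[self.cursor].marked = True

    def reading_list(self) -> list[Paper]:
        # The small collection you take back to your tablet or computer.
        return [p for p in self.papers if p.marked]
```

The point of the sketch is that the XR session's durable output is just `reading_list()`: a short, ordered subset of the catalog that survives the trip back to flatland.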
Frode Hegland: I agree with all of that. I'm just going to briefly share my screen here. So in Reader, you have the ability to do colors, right?
Frode Hegland: I think it might be useful to not do colors but to do "Interesting", "Agree", "Fact", "This is wrong". You know, to make these semantic.
Frode Hegland: So whether that's done in 2D or 3D, I don't care. I am completely in agreement with everything you said. The only reason I'm talking about the contentious map view is that over time you build yourself a better sculpture of what this is. What I found interesting in my own testing in just Author, which is very sub-par, is just having extracted names of people that I care about, people who are important in the industry, and putting them in a column somewhere. So if I select them, I can see who has written about them. That has real value to me.
Speaker5: Right.
Frode Hegland: There's other information that should be in the metadata but isn't, such as which paper won best paper, which got the newcomer award, all of these things. Has somebody won a newcomer award before? Because academia is very much about the people, not just the papers. We cite the papers, but we trust, or don't trust, the academics. You know, Mark and Dene are right in what they say in an ideal sense, but there will also be people they, like any human, just do not like. They don't trust them, right? That may not be in the paper. So that has actual value. The point of this map space initially should really be: get rid of the papers that aren't relevant. Like last year at Hypertext, a lot of it was about geography. No impact on my work. Nothing wrong with it, but no impact. Hide it.
Adam Wern: So, just taking your idea on a small interaction tangent. Let's say you have marked five people that you want to follow, or to know if they are mentioned, and you made a list somewhere, with some action to add them to it. Then, when you browse through that list of 25 proceedings documents, apart from the formal metadata, the abstract, perhaps a summary, that kind of metadata text on the side of the document, it also lights up those people. Not as a list on the wall or somewhere else, but actually attached to that document, as text on the side of it that lights up, or that says "mentions Douglas Engelbart" or "mentions Frode". You see it whenever someone like that is attached to the document metadata view. Would that work?
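Adam's idea here, a personal watch-list whose names light up as badges attached to a document when they are mentioned, could be sketched as below. This is purely illustrative; the function name and the naive substring matching are assumptions, not anything from the prototypes.

```python
def mention_badges(doc_text: str, followed: list[str]) -> list[str]:
    """Return a 'Mentions <name>' badge for each followed person whose
    name appears in the document text. The badges are meant to be shown
    attached to the document itself, not on a separate list on the wall."""
    lower = doc_text.lower()
    return [f"Mentions {name}" for name in followed if name.lower() in lower]
```

A real implementation would want tokenized name matching (initials, surname-only citations) rather than raw substrings, but the attachment-to-document idea is independent of that.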
Frode Hegland: It's interesting, because I would say yes and no. What I would say no to is that in the experiments I've been doing, it can very easily get cluttered; it's nice to have just the titles. That's one concern. But the positive side of this is you should be able to have more shapes. Text is awful because text doesn't have good shape at scale, right? So to be able to have a dog's head and all of these people-shapes on a document... Let's say we all agree that selecting something and having lines is really suboptimal, right? It's just not the best. Even in the map in Author, I want to select things and have the things highlight, because you can't see the lines. But I can imagine that you say, or even speak to a list, "show me all the ones about the pioneers". Sorry, I'm talking in circles. What I mean is: my father liked to work with fax documents because faxes got brown over time. Of course some computer systems have used that: you have a brown older document. It became a tangible metadata thing you could just look at. So we can attach that to other metadata: these are to do with AI, these are to do with these people, and so on. That is the kind of thing we need to do, right? And the notion of saving workspaces, I think, is really important, because once you've done one proceedings, all this extra stuff you've done, you don't want to go and do it manually every time. You want to be able to...
Adam Wern: You know, so you have some sort of template space, in a way, like the Ikea show apartment. Well, not really, but you make your template for a space, and you can perhaps clone it and even change the template.
Frode Hegland: I'm trying to avoid the term template, because...
Adam Wern: Yeah, yeah, it's not fun. It kind of kills you when you hear it.
Frode Hegland: Yeah, but I'm thinking exactly that, which is why I'm happy to go with Dene's "arrangements". I don't think that's bad at all. But you know what I find difficult? Talking back to Dene and Mark. And yes, I know we're on the record. They are very good at saying what things should be in general and what they shouldn't be. Now we're at the point where I get frustrated, because now we have to decide some initial things that just have to be.
Frode Hegland: Right. Most of what Andrew has done, he's done off his own back, interpreting our meetings. I think he's done a tremendous job, but we need to be more prescriptive now and say: this happens with this, and this is viewable by that. And that is really hard to get in our wider community. That's why we need design discussions where everyone's invited, but not necessarily for every topic.
Adam Wern: Yep, I agree. And the challenge here is to be focused enough as well, so we don't talk about everything in these design meetings. We have to not talk about creating papers, not talk about our own other prototypes unless they are really hyper-relevant, so we stay on topic. That is really one challenge, but we are better at that now than before, I think. Slightly better. Not perfect, but slightly better.
Frode Hegland: I mean, the papers have been a bit annoying because they have a specific deadline around them.
Frode Hegland: So what we're going to talk about when you are here is probably going to be at least partly exactly this. We're probably going to walk around the room and look at walls and do all kinds of stuff, and some of it we'll record. Obviously most of it is just people hanging out. But you know, I have a 360 camera now that's a bit better than the old one, so I can imagine we have a design session there.
Frode Hegland: And Fabien may very well start writing things on cards, as he does. That's useful too.
Adam Wern: That's in a few days, and we'll meet Monday morning for something. Yeah, understood. Today it sounded a bit different. But isn't Fabien coming to that meeting?
Speaker5: Yes, yes.
Adam Wern: So it's not necessarily... You said it was about my design, but I think perhaps we make it about the whole design, kind of.
Frode Hegland: On the Monday meeting?
Adam Wern: Yeah, well, the Monday morning one, the kind of European design meeting. Should we make that about continuing what we've done now, with Mark, you, me and Fabien, just to see if we can get the workflow a bit better here? To see where our prototypes potentially fit in, or if they are side by side, or if they are very modal, in that you go into one mode: Fabien mode, Adam mode, Andrew mode, or Frode mode.
Speaker5: I agree.
Frode Hegland: I'll send out a message. Well, I'm going to use your brain for one last quick thing. If I change the annotation thing to be not colors but semantic values... I'm going to save the entire world now. A problem has been with Mark Anderson: he went through one of my documents, one of the PDFs, and absolutely mangled it, as he should. But the point was, there were a lot of things all over the place. So, as we talked about: what about "Interesting", "Issue", "Agree", "Wrong", whatever? What kinds of things might we want to have?
Adam Wern: So there are two types of comments. One is for yourself: when you read a paper and you write "interesting", you know what you mean by interesting. The other is a note for someone else, and then you almost always need some sort of free-form spoken or written text saying what you mean by "interesting": even though it's marked interesting, that's not enough, you need to know what is interesting about it, or something more. So I like the idea of having these types of tags, a limited vocabulary you put on. It could be pre-populated, but also something you could define yourself. You get four of them in the beginning, but you could easily change it if you want a new one: a new color and a new word. You may not even need colors. If you don't like colors, you don't add a color; you just have words then. But it could be colored.
Frode Hegland: Because I'm thinking... no, no, go on.
Adam Wern: So just to be clear: a moment ago we talked about reading for yourself, and now this is kind of for others. Which one are we talking about primarily here?
Frode Hegland: Well, so here's the thing. This covers probably what some students want: they just want colors. Fair enough. And you're absolutely right. "Interesting" is probably for yourself. "Issue", maybe for yourself, if you come across an academic paper and you think something is nonsense. But these two are probably for when you're reading on behalf of someone else.
Speaker5: Right.
Frode Hegland: So we agree on that. Let's say, for the keyboard shortcuts, we keep the other ones, and we also add "I" for interesting. Then we can do "A" for agree, which is in green, and because people can also do "AG", they wouldn't overlap. But for "issue" or "wrong", in red, what should we call that?
Adam Wern: I don't know. I think this whole menu should be user-definable, that's my first thought, but having a good word from the start is of course good. While you're doing this, I'm thinking about voice annotation. At least on the Vision, we have the ability to both speak and record, and transcribe it live if you want to. If you just point to a section, click "interesting" and say why, it can transcribe your voice. So you potentially have both: if you want to keep the voice recording, you have it, but at least you have the transcription.
Adam Wern: So you don't have to write anything, but you can speak. That would at least showcase XR as the direction we're going: a voice-interactable, LLM-capable thing. So I think we should at least keep in mind that you could select something, say "interesting", and it marks it immediately.
Speaker5: Oh, yeah.
Frode Hegland: No, I do agree with that.
Frode Hegland: I think this is all we need, actually. By the way, we keep the normal colors. Of course, they should be user-definable. But as you also said, of course we need to start with something.
Adam Wern: Yeah, you start with that. You can remove them if you really don't like them. But they should transfer from the flatland we have, from Reader, into XR.
Adam Wern: Yeah, and back. That's very important, because then you have done something. My main use case for being in XR is creating that small list, the collection that you will read later. If you don't do that and just move positions around, you may not have done so much work; you make a memory palace and some sort of sculpture. But one of the most important and basic use cases is creating the list that you take back, I think.
Frode Hegland: Yeah, I completely agree with you. The list is absolutely crucial.
Adam Wern: Yeah. And the list with highlighted text as well is extra interesting: a few documents that you have also added a few markers to.
Adam Wern: Yeah. I need to go now and eat a bit, but let's continue this offline and on Monday. I hope Fabien can be there, and Mark as well, because then we can do a kind of pre-Future Text London social. Some good design work. So yeah, we're on record now, but anyway.
Chat Log:
16:04:16 From Peter Wasilko : Noshing in New York.
16:04:27 From Frode Hegland : https://public.3.basecamp.com/p/3LRAZ1LTHaBQ9ujGrYQyQhGt
16:08:21 From Peter Wasilko : Can reader invoke a local LLM? Ollama serves from port 127.0.0.1:11434
16:17:51 From Dene Grigar : you have always been peers. Having a PhD does not affect relationships.
16:18:20 From Mark Anderson : Reacted to "you have always been…" with 👍
16:18:56 From Peter Wasilko : Or having a J.D., LL.M.
16:19:55 From Dene Grigar : right. I do not like the elitism of titles
16:20:14 From Dene Grigar : We should respect the knowledge base, certainly
16:20:34 From Peter Wasilko : Reacted to "We should respect th…" with 👍
16:23:37 From Peter Wasilko : RE: Hallucinations, here is a prime example asking llama3 about Mark Bernstein (which suggests the need for a hallucination-focused alternative to the Turing Test; I call it The Bernstein Hallucination Metric):
16:26:17 From Frode Hegland : Ismail Serageldin
16:27:02 From Frode Hegland : OK, maybe end with a sentence in Ack: We would also like to thank Vint Cerf and Ismail Serageldin for their continued support over the years.
16:29:06 From Frode Hegland : https://futuretextlab.info/current-testing/
16:30:29 From Dene Grigar : I am not able to say this, but I just recompiled the doc
16:30:45 From Frode Hegland : Cannot recompile?
16:36:53 From Peter Wasilko : "Detachable Visual Meta" is my working title.
16:39:28 From Mark Anderson : The high-res paper is currently 9-10 pages, noting that the allowed length is 6 pages (in 2-col); references are extra.
16:40:26 From Mark Anderson : I'm not saying lose the rectangular screen. I'm pointing up the fact that it isn't the only choice. Slightly different 🙂
16:41:11 From Frode Hegland : Reacted to "I'm not saying lose …" with 🔥
16:41:18 From Dene Grigar : co-authoring a paper is not like commenting on one
16:43:08 From Dene Grigar : okay.
16:43:27 From Dene Grigar : I had just thought you were leading the team paper.
16:45:29 From Dene Grigar : This means you are writing your HUMAN paper
16:47:33 From Mark Anderson : I'm fully (over!) committed. Happy to help with LaTeX/Overleaf for any ongoing.
16:50:12 From Peter Wasilko : I can use all of the publication credits I can rack up.
16:51:42 From Peter Wasilko : The card-sized chunk model can also be traced to Charles H. Moore's Forth programming language, which had a "Dictionary" whose definitions were built up from screens holding 16 lines of text each.
16:52:19 From Dene Grigar : Peter, I am happy to comment on your paper and help you get it submitted
16:52:30 From Mark Anderson : Really liked Andrew's video for today 👍
16:52:32 From Dene Grigar : What will be your working title
16:52:46 From Peter Wasilko : Replying to “Peter, I am happy to…”
Much appreciated!
16:53:50 From Frode Hegland : “Detachable Visual-Meta” I think
16:54:18 From Peter Wasilko : Replying to “Peter, I am happy to…”
And I would be honed to have my name included in the group submissions.
16:54:26 From Peter Wasilko : Replying to “Peter, I am happy to…”
*honored
16:54:44 From Peter Wasilko : Reacted to ""Detachable Visual-M…" with 👍
16:57:26 From Peter Wasilko : Also this coming Monday is a US Holiday, so I'll be taking Mum out for the day.
16:58:29 From Mark Anderson : An interesting provocation here in terms of how many "views" are in the overall view (environment), and I don't think I have a firm opinion. It's just nice to see this exploration.
17:04:35 From Mark Anderson : In 2D we have to do "on the side" as tabs and drawers. In the holodeck of XR, I think the open/close could be to collapse, e.g., the library back onto a point/object.
17:04:47 From Peter Wasilko : Raskin's quasimodes might prove useful.
17:06:09 From Peter Wasilko : http://www.canoncat.net/the/Manual%2042.txt
17:11:08 From Dene Grigar : I want to play Scrabble in Virtual Space
17:11:09 From Peter Wasilko : An axis-constrained spinnable cube could be interesting (i.e. rotate left, right, up, or down, snapping to the nearest face facing the viewer, with nice square edges, so it can go a few degrees off axis by accident).
17:12:07 From Peter Wasilko : Just in sidebar text-interaction mode today.
17:13:10 From Dene Grigar : I used to play Scrabble in the MOO back in the day. It is odd that we could so easily do this in the mid-1990s but not so much today with all of the upgrades and new tech we have
17:15:54 From Mark Anderson : Ofc, you might want your game of Scrabble in a window within your XR, as you can still work when it is not your game turn. So the Scrabble board is in a "scaffold" (I take that to be a fixed window), but it doesn't have to be the whole environment, unless that is our choice.
17:16:08 From Fabien Benetou : for ref video of the shown demo https://x.com/utopiah/status/1793306026952856018 where I drag&drop a PDF and get it in AR without reloading a page
17:16:29 From Fabien Benetou : 3D Scrabble in XR
17:16:57 From Mark Anderson : Reacted to "for ref video of the…" with 👍
17:17:22 From Dene Grigar : Replying to “3D Scrabble in XR”
of course. that would be ideal
17:17:48 From Fabien Benetou : Replying to “3D Scrabble in XR”
honestly as long as we let the players decide if the words are valid, would be trivial to implement
17:17:51 From Dene Grigar : Replying to “3D Scrabble in XR”
That gives me an idea for the capstone project for my students next fall
17:18:02 From Peter Wasilko : https://www.reddit.com/r/VintageApple/comments/x9iq6j/macintoshs_first_app_switcher_switcher_by_andy/
17:18:26 From Dene Grigar : Replying to “3D Scrabble in XR”
Most of them learn how to do 3D modeling and have done a major project in that course modeling and animating a chess game
17:18:40 From Mark Anderson : Replying to “3D Scrabble in XR”
As we're living in the Future, would that be Kal-Toh (the Vulcan strategy game)?
17:18:48 From Dene Grigar : Replying to “3D Scrabble in XR”
Moving that into VR would be an interesting exploration for them
17:19:07 From Dene Grigar : Replying to “3D Scrabble in XR”
I have several high level programmers in that class, which helps
17:19:30 From Dene Grigar : Reacted to "As we're living in t…" with 👍
17:20:11 From Mark Anderson : Reacted to "I have several high …" with 👍
17:20:13 From Fabien Benetou : Replying to “3D Scrabble in XR”
I’d build on top of https://x.com/utopiah/status/1780630414488465498 where each cube would be a letter, each player having their randomly picked assigned letters
17:20:25 From Dene Grigar : I caution us from making Andrew redo what he has already built and instead build on that work
17:20:55 From Dene Grigar : Replying to “https://www.reddit.c…”
thanks
17:21:32 From Mark Anderson : Are environments nestable/collapsible or modal?
17:21:56 From Fabien Benetou : Replying to “3D Scrabble in XR”
FWIW https://a.fsdn.com/con/app/proj/scrabble/screenshots/240954.jpg/max/max/1 from https://sourceforge.net/projects/scrabble/
17:22:09 From Dene Grigar : Replying to “https://www.reddit.c…”
I prefer elements
17:22:15 From Dene Grigar : I prefer elements
17:22:29 From Fabien Benetou : Replying to “3D Scrabble in XR”
which interestingly, arguably shows how terrible it is projected in 2D
17:23:39 From Fabien Benetou : it’s elements in the DOM too
17:25:12 From Dene Grigar : Reacted to "it's elements in the…" with 👍
17:26:03 From Peter Wasilko : Replying to “https://www.reddit.c…”
Elements?
17:27:25 From Fabien Benetou : Replying to “for ref video of t…”
oops https://x.com/utopiah/status/1793314676220088655
17:29:05 From Dene Grigar : arrange is a verb
17:29:20 From Dene Grigar : If we are using it to describe an object, then call it arrangement
17:29:47 From Dene Grigar : this term corresponds to the canon of rhetoric, "arrangement"
17:30:07 From Andrew Thompson : Minor nuance, but the elbow isn’t tracked in XR. (You can create an IK rig but that might be overcomplicating things)
17:30:33 From Frode Hegland : Ah ok..
17:30:35 From Dene Grigar : Reacted to "Minor nuance, but th…" with 👍
17:30:43 From Frode Hegland : Major nuance!
17:31:26 From Andrew Thompson : I feel like your concept still works, perhaps just moving the second sphere closer to the wrist, or making it the palm.
17:31:47 From Andrew Thompson : Though multiple spheres might start getting confusing. Worth discussing.
17:35:25 From Peter Wasilko : This discussion thread has some interesting spatial ideas in it: https://news.ycombinator.com/item?id=38141173
17:36:18 From Peter Wasilko : And from the linked article:
17:36:54 From Fabien Benetou : Replying to “This discussion th…”
Yes even though it’s limited it’s interesting because window managers, like KDE, are already used to manipulate abstractions, e.g file system, file explorer, workspaces, etc
17:37:30 From Fabien Benetou : Replying to “Screenshot2024_05_…”
which isn’t 3D btw, it’s 3D for rendering but not for manipulation, where it’s a 1D torus iirc
17:40:39 From Peter Wasilko : Replying to “Screenshot2024_05_22_123606.jpg”
But it might work in real 3-D concept-wise.
17:42:06 From Peter Wasilko : Replying to “Screenshot2024_05_22_123606.jpg”
Was it Switcher or Servant that had the faux-3d cube rotation on early black and white Macs?
17:42:16 From Mark Anderson : It's a form of spatial hypertext!
17:43:41 From Dene Grigar : Reacted to "It's a form of spati…" with 👍
17:47:49 From Dene Grigar : Andrew, let Holly know I will be late
17:48:49 From Dene Grigar : good, thanks
17:49:15 From Andrew Thompson : Reacted to "Andrew, let Holly kn…" with 👍
17:50:33 From Dene Grigar : In data visualization it means something different than what we are talking about
17:53:06 From Fabien Benetou : just tilted my head because “n word” is triggering for quite a few people
17:55:13 From Mark Anderson : If it helps, I think the library "list" is simply the serving suggestion, the opening arrangement. IOW, it isn't a "view", just a reasonably accessible starting layout of objects.
17:55:14 From Andrew Thompson : I need to head out as well, see you all next week.
17:59:12 From Mark Anderson : I'm going to have to drop off soon
18:08:39 From Fabien Benetou : Have to go unfortunately, take care everyone
18:10:03 From Peter Wasilko : I am out of time, good luck with the paper writing.
18:17:54 From Mark Anderson : FWIW, a proceedings is c.100-150 people, though many-author papers are growing (making name recognition harder).