6 May 2024

Dene Grigar: Good morning.

Frode Hegland: Good morning, young lady. How are you?

Dene Grigar: I’m good.

Frode Hegland: That’s good.

Dene Grigar: I got your message. Yes. It was already on the agenda to get Holly to get the invitation done. I also think we need a specific web page set up for the symposium that includes hotels, restaurants, transportation, all the things that I’ve done for other conferences. And I’ll put that together and you can just link to it. We’ll call it “getting there and staying”, with accommodations.

Frode Hegland: Yeah. Local information and stuff like that. Absolutely. Hello, Andrew.

Dene Grigar: Good to see you this morning. Can’t write contracts.

Frode Hegland: Yeah. Cool.

Dene Grigar: So they need to have it in files.

Frode Hegland: Exactly. So when you’re talking about an invitation for the symposium, do you mean separate from the email and separate from the page, or do you mean that…

Dene Grigar: A really nice card that looks like an announcement card that we’ll send out, so people can keep it. It’s not just an email; it’s something they can actually put on their desktop.

Frode Hegland: Okay.

Dene Grigar: I do this for everything that we have events for, you know. And it’s useful; it contains all the information. The one thing I should ask: did we decide not to have Zoom? Are we going to allow people to come in by Zoom?

Frode Hegland: Nobody can present, but people can watch. That’s okay. Do you agree?

Dene Grigar: And I’ll make it soon. Yeah, absolutely. Do you want to set up the Zoom link for me? Is this going to be the same one we’re usually using?

Frode Hegland: I think this one. Unless there’s a reason to change it.

Speaker3: Not looking for.

Dene Grigar: No problem.

Frode Hegland: So, is everybody here? I know that Mark is not going to make it. Yeah. Right. Okay. So, what I’ve been talking to friends about over the last few days, and I think we’ve talked about it too, is just to go through it. But also, Dene, what I’m talking about now is probably going to be my blue sky submission. So we need to coordinate who writes what for what topic, so we don’t have different papers that step on each other’s toes, obviously.

Dene Grigar: Okay, so I’m doing a Blue Skies for the team. What are you doing? You’re doing the HUMAN workshop?

Frode Hegland: Yeah. Well, I don’t know who does what, and I don’t care, as long as we’re all happy. But what came up in the meeting with you and Klaus the other day is pretty much what I think whatever thing I’m writing will be. And it’s for the same work, but we have to decide what aspect the different papers are covering.

Dene Grigar: Okay, so what we talked about with Klaus — and we’d better get this organized, because I’ve already started on the Blue Sky paper for the team, and I put that into the chat yesterday. So you’re doing the HUMAN workshop keynote slash paper. Then the team is doing the Blue Skies, and you are going to be with me, Mark, and Peter, who has also volunteered. I haven’t heard from anybody else, so there’s four of us on that paper. So your HUMAN workshop one is just you, and that’s a paper. The Blue Sky, as we talked about with Klaus, is the team, and it’s four of us so far. And I have already sent out a note about that, and I’ll put in that note what we’re talking about. So if you just look at the — hi, Adam, good to see you again; hey, Leon — so if you look at the Slack channel, we’re talking about the Blue Sky paper and the paper that we’re talking about for the presentation.

Frode Hegland: Right. Yeah.

Dene Grigar: So just to refresh everybody’s memory: when you go to the Hypertext conference, there’s not one thing you do that doesn’t have a written paper. It goes in the proceedings even if you do a demo and it’s just showing things, and you have to write a formal paper in LaTeX style, right? So when you say a paper, you’re also talking about a presentation or a demo or something. Frode has been invited to keynote the HUMAN workshop, which is great.

Frode Hegland: Not keynote. I did a keynote last year; I’m doing a normal paper this year.

Speaker3: Okay.

Dene Grigar: He’s going to be submitting to the HUMAN workshop. That’s great. And then the team is going to be doing a Blue Sky paper. And yesterday I worked on the beginning of it, and I put a little bit about it into our chat in the Slack channel, so I gave you a link to it. Some people may never have written a Blue Sky, so I gave you a definition of it. And I generally use Overleaf, but it costs money every month, so we’re going to be using a Google Doc, and then I’ll put it in Overleaf for the final version.

Frode Hegland: I think Mark was suggesting Overleaf as well. I think he may have an account, but

Dene Grigar: Better to, because it costs me like $20 a month.

Speaker3: Okay.

Dene Grigar: But it’s something that I don’t think everybody should have to get. So Mark and I can get it into final form; we’ll just use a Google Doc. And I see that when I did post that, Mark said he has an account. That’s fine. And then Peter wants to be on it. Those are the only two people that have said they wanted to do it, besides Frode and me. So if you want to be part of the paper, let me know.

Frode Hegland: So, I’ve opened up your Blue Skies presentation document. Do you have an outline or a topic for the Blue Skies?

Speaker3: Not yet. Right. Okay.

Frode Hegland: So I thought we’d.

Dene Grigar: Talk about it today. What do people want to write about?

Frode Hegland: Yeah. Now, that’s exactly the point. So anybody can submit a paper, really. And, you know, in this community, we should just coordinate to see which papers we are collaborating on. And if we’re writing separate papers, make sure they don’t step on each other’s toes, so to speak, considering we’re probably writing on the same topic — but that they connect, which would be good.

Speaker3: Hi, Peter. Hi, everybody. How are you doing, everybody?

Frode Hegland: Yeah. Good.

Speaker3: Good, good, good.

Frode Hegland: Just want to check something.

Peter Wasilko: Okay, so you don’t hear me chewing.

Frode Hegland: That was a good thing. So, yeah. The paper that I want to submit for the HUMAN workshop is on doing outlines. And the point of that is — I’m just trying to open up my document here, and there was something weird earlier. Sorry, one second. Right. Here we go. Okay, so as I’ve talked about randomly with some of you, we’ve looked at libraries and worked on libraries. We’ve worked on, you know, various ideas of what a library means: is it a catalog, a reference section, is it an actual library? We’ve also looked at documents. And one thing that really came to mind for me last week was the idea of a conference proceedings, or journals, or a book like our book — hi, Rob — in other words, a document which is produced by many people. And that seems to be a very, very interesting data set, rather than an entire library, because in an entire library people have put their own documents in, so they have some idea, you know, what’s going on. So what I think will be very exciting for us to focus on, with the main software that Andrew is building, is: someone puts on the headset and they’re presented with the proceedings of this year’s Hypertext conference. So the question is then how they can view it. So what I’ll do now, if you don’t mind, I will show you something. You can see my screen, right?

Speaker3: Yes! Yeah.

Frode Hegland: Scream if you can’t. So what I’ve been doing over the last few days: I’ve taken all the documents in last year’s proceedings and copied them over to a new document. So if you look at this, you’ll see these headings — each heading is one paper.

Frode Hegland: I’m just going to make this a bit wider, the way it’s listed. And don’t forget, the proceedings are often not released as one document; very often it’s a collection of individual articles. Last year we were lucky, we did have it as one document. However, there is no table of contents and there is no outline if you open it up in Acrobat or Preview or whatever. So if we look at this: here is “Two Celebrities”, hypertextuality and virtual reality. The article itself I have defined. By defined, I mean I have written in the two contributors, Dene and Rob. They are in bold, meaning they are also defined. The summary, names mentioned, and keywords I extracted from that paper. It’s not meant to be any kind of pure AI extraction, so in many cases I’ve gone in and tweaked things a little bit and edited, right? So the reason I think this is relevant to us will become more apparent in a minute. But here’s the first sentence of the abstract, followed by a link. So there are two things to show you. This is the first; some of you have seen something like this before. So here is a document, a PDF. And as you saw before: the title, author names, first sentence of the abstract. But if you click on the citation, you get the second sentence, meaning you get a bit more. If you now click on the title of the document, it opens the original document that has all the proceedings, to the right page.

Frode Hegland: I think that’s kind of cool, but that’s not really where we’re going. What’s kind of exciting to me is if you click on the map view at the bottom here, you get something like this. So here — there’s something very strange that’s happened with this document, it doesn’t do all the lines — but if I select all the papers, these are all the papers here in the middle, and you can see how they’re connected to the people, and so on. See this? So my point here really is: this is cool, I think, to an extent, but it would be much better in an XR environment, because you need more space. The use case for this is you’re going to a conference; you get all these 30-odd papers, you know, put in your lap. This is a way, hopefully, that you can have a better understanding of which papers you want to read. There are some design things that we have to do here; it’s not always easy to see where things connect, and all of that stuff. And, Adam, you saw this working earlier, but for now — now, for some reason, if I click on the countries, it doesn’t do anything. Here’s a detail you can see: these are the people that I work with the most, so we’re all in a special little bunch put up like this. So what I’m proposing is that for our main work, we use the current conference proceedings as the core unit of work. We incorporate what Mark and Adam have worked on before, to have every single paper that’s been in Hypertext available in a view. But the initial use case is to have this wide thinking space where the user, with really interesting controls such as tapping on the sphere and using Andrew’s display panel, chooses what to see — so that, for instance, one user can use the full environment, someone else might want to use columns, or whatever, and these different views can be saved.
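
A minimal sketch of the per-paper record this implies, written in TypeScript; the field names are illustrative assumptions, not an agreed format:

    // One entry per paper in the proceedings: title, contributors, the first
    // sentence(s) of the abstract, keywords, names mentioned, and a pointer
    // to the page in the combined proceedings PDF, if one exists.
    interface PaperEntry {
      title: string;
      authors: string[];        // the defined contributors
      abstractLead: string;     // first sentence, shown by default
      abstractMore?: string;    // second sentence, revealed on demand
      keywords: string[];       // lightly edited, not raw machine output
      namesMentioned: string[]; // people discussed or cited in the paper
      sourcePage?: number;      // page in the single proceedings PDF
    }

    // The proceedings is then just a list of such entries, which a map view
    // can use to draw the paper-to-person connections shown in the demo.
    type Proceedings = PaperEntry[];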

Dene Grigar: So is this your personal paper? I mean, the one that I’m putting forward — and I did mention a title, or at least a theme, and that’s XR and spatial hypertext, and I suggested four readings — that’s the one I’m imagining for the team, that we have to do for Sloan. So we’re actually doing maybe three papers: your HUMAN one, mine, and then the team paper, which is our Sloan paper. So which one are you talking about? Is that your personal HUMAN workshop paper?

Frode Hegland: I’m not sure exactly what will go in my HUMAN paper, but for the system ideas that I just showed, the intention is that this will be our focus for Andrew Thompson’s WebXR experience.

Dene Grigar: So the demo.

Speaker3: Yeah.

Frode Hegland: The demo — and then we decide what papers will come out of that. Because, you know, we’ve spent a lot of time on interactions, which has been very, very fruitful, and we’ve had basically reference-section views so far, but we’ve started experimenting with how things connect. So this is kind of an extension of that for a very clear use case: all academics have got to go through proceedings, decide what to read and what to throw away, and all of that stuff. So, as for papers, this is more about the work rather than the papers at this point.

Dene Grigar: Well, the work has to have a paper. I mean, that’s a problem. I mean, the thing that you just showed — for Rob and me, we were so surprised, because we were just doing a demo, because the work hadn’t been finished yet. We’re like, let’s just show you where we are right now. And then we had to write a paper about it. Remember that, Rob? We’re like, what?

Frode Hegland: No, I completely understand that. Absolutely. But what aspects of it will be what paper, by what group of people, I think is obviously very, very important to discuss and agree on. But right now I’m proposing to the group that we expand our work to focus on proceedings, rather than a reference section or free-standing PDFs.

Dene Grigar: So let me suggest this. You know, you have really great ideas and you’ve got a whole direction you want to go. I would like to write one on XR and spatial hypertext, and that could be my HUMAN or Blue Sky paper, and anyone that wants to join me on that can. I did post four articles that came out of the ACM over the last 30 years that deal with some aspect of this topic. If you want to join me, fine. If you don’t, I’ll do it on my own. But it’s certainly there and it’s in the Slack channel. Just let me know if you want to write with me or not, okay? And the rest of it, Frode will lead. I’ll leave it at that.

Frode Hegland: Yeah. So we have to decide on the papers; it is really important. And I’m glad we are on track now to start sending out the invitations, which is equally important. But just to take a bit of a poll in the community here on where Andrew’s focus should be: what is the feeling about focusing on helping a user deal with a proceedings or a journal as the core atomic unit? Something I’m extremely excited about. Hands?

Dene Grigar: I just imagined that what we were going to do with Andrew is set up our system and let him show where we are at the moment. Like, here’s everything we’ve done so far and here’s where we’re headed. That’s what I was imagining we were going to be doing. Not necessarily a structured argument of any sort, just a procedure, right? And then we write that up: here’s what the Sloan Foundation grant was expected to turn out, here’s what we’ve done thus far, here are the next steps. We’re now nine months into it by then, you know? So here’s where we are; we’ve got a year and three months left. And I thought that’s what we were going to be doing for the demo.

Speaker3: Yeah. Yeah, absolutely.

Dene Grigar: Leon. Question.

Speaker3: Nothing. So.

Dene Grigar: Leon, a great question. There is no difference. The proceedings is the journal.

Frode Hegland: So this is something that Mark also pointed out when I shared some of the initial comments. A journal, a companion book like The Future of Text, or proceedings are structurally the same: they are a collection of articles by different people. The key thing about proceedings is that they are from a specific conference.

Frode Hegland: A journal could come out, let’s say, three or four times a year, and people may or may not meet. But structurally, for what we’re talking about, it’s the same thing. And what I find very exciting here is twofold. The first opportunity is the metadata to tie together the individual articles, which may or may not be distributed in one binding PDF — that doesn’t always happen — and also the metadata for the spatial layout, which people may or may not do differently. And then there’s the research: you know, on the 27-inch screen this is quite nice, but it’s quite clear we need more space. So, to use what Andrew has built and is building to have a huge thing. I talked a little bit earlier with Adam today; we talked about having these spheres, so to speak, that contain different layouts, different workspaces. So that would be something to research: what does that mean? How do you go from one space to another? How do they connect? The core idea is: someone puts on the headset and all they’re told is, you know, have a look around, and what’s presented in front of them looks initially relatively linear — again, up for discussion — but it is the papers from the current conference that they are at, so they are clearly interested. And then it is explained how they can see them in relation to earlier papers, in relation to people, in terms of keywords, all of these things. So the key aspect here is they can decide which papers are interesting for them. So they can discard some papers, highlight some papers, add some papers to the library. They can highlight some people that they may consider either historically interesting or as colleagues, and some people that they might find doing very different work they can kind of hide. All of these interactions then become possible. But the key thing is — and I’m so excited, so I’m sorry if I’m repeating myself — instead of being an entire library or just one document, it is one collection of documents. I think that’s super exciting.

Dene Grigar: I just wrote into the chat that proceedings are papers that are presented at an academic conference; the ACM proceedings are peer reviewed and considered top tier. So if you’re in those proceedings, that’s considered — you know, there are tiers, like: this is a really great one, this is okay, this is not so great. And there are generally four tiers, and this is a number one tier. A journal article is just published in a journal, and the journals are also ranked. So if you’re in something like ebr, which is electronic book review — or what’s another good one?

Dene Grigar: Oh, there are just lots of great journals — then that’s a top-tier journal paper, and they’re published maybe twice a year. Journals come out sometimes four times a year; they’re not, you know, every month, generally. Academic proceedings are generally once a year. And then a chapter in a book is what we’ve been doing with Frode for all these years. The book that we’re doing with Frode is not peer reviewed, and when it’s not peer reviewed, in academics it doesn’t count as much as something that is peer reviewed. And so I’d love to be able to peer review maybe the year-two book, because some of us that are academics could use it. Anyway. So yeah: peer reviewed versus non-peer-reviewed.

Speaker3: Right.

Frode Hegland: So one thing I need to hear about from the group is: who likes the idea of using the proceedings as our core data corpus to do the interactions around?

Dene Grigar: But that’s what we told Klaus on Friday or whenever it was we met with him that we were doing.

Speaker3: That’s what we suggested then.

Frode Hegland: But this is because — Peter and Leon, I hope you are not averse to the idea. I see Leon thinking really hard.

Frode Hegland: I see Peter smiling from long ago. Okay, now that’s good. Okay, so that means we can start some really exciting things. Peter.

Peter Wasilko: Yeah. I also think that we should incorporate that metadata that isn’t part of the actual document in the corpus, so that we can use visual meta to show linkages between elements of the corpus that aren’t part of the actual items themselves. And that’s where we’re really adding a lot of value. That’s being lost to catalogs now, because we don’t have that collaborative aspect to allow recording external metadata.

Frode Hegland: Yeah, absolutely, Peter. That’s crucial. And I keep referring to — I’m just going to put in a link here, excuse me, a link to an image on Twitter — that’s the Peter contribution, to be able to do exactly that: to be able to decide what is the actual metadata, the core metadata of a document, and then, external to that, how it is laid out. You know, I want to be able to save my layouts and go from my own computer into what Andrew’s done, what Adam’s done, whatever, with that layout information, change it, and go back. So what doesn’t change and what does change will be hugely important. Andrew, I’d like you to interrogate us a little bit, because obviously this has been a little bit just poured all over your head. I’m sure you have questions, to make sure you’re happy.
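
A rough sketch of the split being described here — core metadata that travels with the document versus external, community-added metadata such as linkages — using purely illustrative TypeScript shapes:

    // Core metadata stays with the document (for example in a Visual-Meta
    // style appendix) and does not change when people work with the corpus.
    interface CoreMetadata {
      title: string;
      authors: string[];
      keywords: string[];
    }

    // External metadata lives alongside the corpus and records linkages that
    // the documents themselves don't contain, as Peter describes.
    interface ExternalLink {
      fromId: string;    // id of one corpus item
      toId: string;      // id of another corpus item
      relation: string;  // e.g. "extends", "responds-to", "same-dataset"
      addedBy: string;   // person or group who recorded the link
    }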

Andrew Thompson: So from my understanding this isn’t so different from what I had in mind. So there’s something in my eye today. I think it’s an eyelash. I knew we wanted to have, like viewing modes in the library that changes the layout of things, and we wanted connections between the documents. Of course, those aren’t implemented yet. But I’ve been thinking that we’re going to get to that at some point, and. Maybe because I didn’t know exactly the difference between that and this. This seems like the same thing to me. Where it’s like now we’re focusing more on the library viewing modes. It’s like, yeah, that makes sense. So I’m totally on board. I thought that was the direction we were already going. The only difference and I’ll.

Speaker3: Do continue. I’m sorry.

Andrew Thompson: Oh, I’m just saying, the only thing is I wasn’t ready to implement it yet because we’re trying to get document viewing. But that’s going along well, and I’ll have a rudimentary version by Wednesday. It’s not going to handle all the edge cases like tables don’t render properly, images don’t show up. But at that point, we can decide if we want to switch focus from getting like all the content of the document rendering properly. Or it could be like, yeah, it’s fine. It’s just for like a preview. The focus is the library and I can switch over to that. So that’s that’s totally fine by me.

Speaker3: Yeah. Okay.

Frode Hegland: That’s very cool. Yeah. The only real change here is the scale we’re working on. So again, anybody who disagrees with me, please interrupt — don’t even do a hand. The way that I see it is that if we look at it like physics, as an analogy, we have elementary particles. Here, the elementary particles are author names, keywords, and titles of documents, right? They come together to make bigger things, like atoms and all of that stuff. And the reason it’s a fun analogy is we have the sphere. So if we imagine we have one layout that is one sphere, these layouts can go together in Fabien-style connected blocks, and now it looks like a molecule, right? I don’t know what any of that means — I think this will be phenomenally interesting to research. But one thing I’m thinking about is the view that — you know, I spent two days making the metadata for last year’s proceedings, obviously not the entire days, but it took a long time to do it manually as a test. So I could imagine a very simple view, in XR or on a normal screen, where you have all the papers in a list, and you can keep expanding until you read the actual entire paper there, in HTML or plain text, right? That should be possible. So it can be really boring, but all the metadata is there. And we also need to research — meaning discuss and test — what AI we will use. I think we should use AI for entity extraction, but I am also wary that if we go overboard with AI we will get nonsense. So there has to be some level of editorial involvement here. One of the key things all this work is about is suggesting better metadata handling by publishers. So I’m not saying this should just be an automatic process at all; but because we work closely with the ACM, you know, we present this as our perfect case study: this is what you need to do to replicate it. Fabien.

Fabien Benetou: Yes, briefly, on the AI part: it’s just not my interest. I understand it’s trendy and there are powerful things possible with it, but I would feel more comfortable — I think I mentioned it chatting with you the other day — with something a bit more generic, like document transformation. That can be entity extraction, it can be summarizing, all this. But I think it removes a little bit of the hype; if it works, I understand. So yeah, just “transformation”, or text or content transformation — it doesn’t imply how it’s done. Maybe there is a human being fixing it, maybe it’s NLP, maybe it’s machine learning, whatever. So yeah, that’s what I would tend to prefer.

Frode Hegland: I agree with you 100% on that, probably. And when I say AI, I am saying it a bit provocatively in the group, so we don’t hide behind words. However, we are talking about the same thing, and, you know, how we choose to present it later to the wider world is important — the language we choose. We can say machine extraction of entities, or maybe AI, depending on how we really feel about it later. But one of the things that bothered me before doing different views is that if you do an AI summary, it’s so useless, because if anything looks interesting you want to read the original stuff anyway, right? So where does it fit? So it really comes down to the boring level, just like what you said: names and keywords, that kind of stuff. And with a human in the loop to make sure that the period of the name isn’t in the wrong place, etc.
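
A small sketch of the “machine extraction with a human in the loop” idea discussed above; the function and field names are hypothetical, and the extraction step could equally be AI, NLP, or manual work, as Fabien suggests:

    // Extracted entities are only candidates until an editor confirms them.
    interface ExtractedEntity {
      text: string;                  // e.g. "Ted Nelson"
      kind: "person" | "keyword";
      confirmed: boolean;            // flipped by a human editor
    }

    // Any content transformation can produce candidates; a human pass then
    // fixes misplaced periods in names, bogus keywords, and so on.
    function reviewEntities(
      candidates: ExtractedEntity[],
      approve: (e: ExtractedEntity) => boolean
    ): ExtractedEntity[] {
      return candidates
        .map(e => ({ ...e, confirmed: approve(e) }))
        .filter(e => e.confirmed);
    }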

Speaker3: Okay.

Frode Hegland: So now that we’re going through this, Dene pointed out something important here.

Dene Grigar: I’m writing this. Let me finish this up.

Frode Hegland: Oh, you’re starting. Okay.

Dene Grigar: So we should list the papers that we’re suggesting. We’re now talking about four. So there are two opportunities left, and the deadline is May 26th. The Blue Skies are provocation papers, and then there are workshops, and the one that fits us is the HUMAN workshop — that’s the one that Klaus is overseeing. So I’m interested in a Blue Skies paper, which I laid out for us in the Slack channel. If anyone wants to join me on that one, the topic is XR and hypertext, and frankly there’s not been much work done on this topic at all. I mean, I went through yesterday and spent the day going through the proceedings for ACM Hypertext since 1995, and I only found a handful that apply to exactly what we’re looking at. And I didn’t want to overwhelm you with, like, 50, so I picked the top four and put them in the Basecamp for you. So you have all the articles there you should read. We should start with reading those references and then shape our paper around that. So read first, then write the paper — rather than say this is what we’re going to write about and then go look for papers, we should see what’s been said already.

Dene Grigar: And then, you know, come up with our research question and then a paper. It doesn’t have to be any longer than six pages, which is really short, but it’s definitely the way that I want to go, and if you want to join me on that one, that’s fine. We also have to have one for the Sloan Foundation, and that, I agree, should be the proceedings one. So Frode will lead that one, and he will set up a Google Doc for that. We also have testing that we need to be doing, but — whatever you want to do. I mean, certainly we’ve got to have something we can all write on, right? We don’t want to be passing a paper around; it needs to be something we can all share at the same time. The bibliography is not counted towards the page count, Peter, so we have six pages total. And then Bob says, “I’ll be up for writing on manipulating grammars and XR, either Blue Sky or workshop.” A workshop would be fantastic — if you want to do that, that’d be wonderful. And we all can submit to HUMAN, because Klaus is looking for new people to come in. He also says he’s short on papers, and so he’s really interested in what we’re doing, and also in beefing up his workshop. The workshop — just so you know, the Hypertext conference is very, very weird.

Dene Grigar: Because day one is a series of workshops, and workshops, from my perspective, means people getting into a room together and working on something together. Workshops for ACM Hypertext are actually papers around a theme. And so there’s half a day on HUMAN, half a day on another one, and there are like two days of workshops. And then the conference starts, and the conference presentations are the Blue Skies or provocations and the formal papers. The deadline for those formal papers has passed; we didn’t do that one, we didn’t make that deadline, we weren’t ready to. But May 26th is for workshops, provocations, Blue Skies — those types. Anyway, and if we do a demo, demos are also part of that, during the conference time, but it’s also the later May 26th deadline, and that is also a paper; it can be a very short, like one-page, thing. So that’s great, Bob — so you’d be on Blue Skies with me? Perfect. We already started this last year; we’ll just continue. So I’m hoping Mark will work with me too.

Frode Hegland: So, we mustn’t forget we also need papers for our book, which will be a different angle on this — which is something that is literally in the Sloan Foundation thing. So that’s one thing. So, when you are talking about the Blue Skies on hypertext and XR, Dene, how directly is that related to the code we’re building?

Dene Grigar: Oh, very much so. Some of the things I’m thinking about — what I was thinking about yesterday, and I went down a rabbit hole, which is what happens when you start thinking about these papers — is that most of these papers are talking about spatial hypertext in terms of objects. So it’s the way we organize information: the information is the object that’s being manipulated, and it’s visualized in some way. It’s different than a data visualization; it’s a structural visualization that’s very organized, right? And they mention things like, you know, HyperCard, Storyspace, Tinderbox, all these different hypertext systems. What they haven’t talked about yet is what happens when people are also in that same space as the objects themselves. What does that mean? For example, you know, our role as the producer of these spaces, as well as the participant in these spaces, as well as part of the objects in that space — it changes the whole dynamic. That’s a big rabbit hole, because it means I’m going in a direction that not many people have looked at in this field. But what is the human aspect of it? What does it mean to be in this space as a human being and be an object? And I’m thinking specifically about the persona, the spatial personas. And we’ve been doing spatial personas for a while. I mean, that Meta system that we played in a couple of weeks ago, whenever it was — I mean, we were like little bobbleheads.

Speaker3: Oh.

Frode Hegland: You mean FaceTime?

Dene Grigar: Yeah, with this little bubble? No. Well, the FaceTime thing is what Apple’s doing, but I’m talking about that Meta system that we were in for the presentation. Remember, we did a presentation with those folks.

Speaker3: Mozilla hubs. I think it was Mozilla.

Dene Grigar: Mozilla Hubs, yeah. We were all bobbleheads in the system.

Frode Hegland: Oh, the cartoon people, right? Right.

Dene Grigar: Well, I mean, it was cool. I mean, this is what Meta is all about, right? Because I’ve done the Meta presentations too. But you are this little bobblehead inside this space, and you’re interacting with each other, and you get close to each other to hear; when you back away, you don’t necessarily hear what people are saying. And then you have a screen and you’re all looking at the screen. So there is already this kind of spatial hypertext in which human beings are inside the space with the objects.

Frode Hegland: Are we going to directly support multi-user for what Andrew is building though? Would we be able to do that?

Dene Grigar: At some point it’d be nice. It’s not part of our project, but it’s a stretch goal.

Speaker3: Oh, but no.

Frode Hegland: No, but since you’re saying that — I wondered, from what you’re talking about with the human and hypertext, what that meant in terms of our current coding. So that is a little outside of that, isn’t it?

Dene Grigar: I don’t think it changes anything, because it doesn’t matter. You know, he’s making these objects for XR and we’re all coming in through FaceTime and interacting with them. So it’s not really changing anything he’s doing. What changes is that —

Frode Hegland: But no, the system that Andrew is building now is single user. The user is in the center, but it’s kind of on a swivel chair. So that’s a very specific interaction. We’re also, of course, looking at other ones. But in terms of the specific — yeah, sorry, I see you kind of have a hand up there.

Speaker3: Adam. Oh.

Fabien Benetou: Yeah.

Adam Wern: But isn’t Blue Sky really blue sky, in kind of the long, forward-looking direction? I’m not a native English speaker, but Blue Sky should be what Dene is talking about — like the direction we are going, or want to go, or could go, and not just the practical “here is a proceedings, just in XR”. We need to be much further along in our thoughts here to do something interesting.

Dene Grigar: Oh, thank you, Adam. Yes, that’s precisely it. It’s supposed to be like: what’s possible? Where are we headed? What’s the possible way we’re headed? Here’s where we are right now, here’s what’s happening in the Apple Vision Pro world, or with spatial hypertext, the spatial personas. Here’s what could happen in this space with the stuff we’re doing. Yeah, but — let me just finish one more thing — I want to focus on hypertext. That’s the key thing, because the conference is Hypertext, so we can’t just talk about XR; we’ve got to talk about something that’s hypertextual in nature. So what I’m interested in, Peter, is what happens when human beings are hypertextual objects. You know — and so, with the Hubs that we were in, when we got close to someone — this reminds me of Klaus’s Mother project — as we got close to one of the avatars, we could hear them. So the spatial part of it was there; there was this kind of spatial hypertext involvement in which we could get close to somebody and actually interact with them. As we moved away, we couldn’t necessarily hear them, right? So there is this hypertextuality that’s already in place, and so that’s there, right? But what does that mean? So the blue sky part is: what happens, what can happen in the future, and what does it mean when it does happen? Does that make sense to everybody?

Speaker3: Yeah, it makes perfect sense.

Frode Hegland: The only thing I was trying to establish is: of the different papers we are writing, which one most directly relates to basically writing up the work that we’re building?

Dene Grigar: Well, not mine; mine is not part of that. I mean, I’m doing my own Blue Sky with anyone who wants to. You’re going to lead the one for the Sloan Foundation team, because you’ve got that vision. This is just something different that I think will be useful. I mean, last week when we talked, you told me I was going to be doing the team one, but it sounds like you have a much better idea of how to make this work for the proceedings.

Frode Hegland: I don’t care about the papers very much at all — I care that they’re good quality and done correctly. What I really care about is that we build the system that will blow people’s brains when they put the headset on on the day. So that’s why I’m so re-energized by having these proceedings be our main work, and I’m just trying now to establish how that’s going to be written up and, you know, who will be the authors on that. I mean, the thing that I wouldn’t mind writing is basically an intelligent outline thing. That could be my paper.

Frode Hegland: But, you know, I think we’ll need one main paper to say: this is the thing that we built, anyway.

Dene Grigar: And I think you should lead that, and that’s great. I mean, what you’re talking about with Visual-Meta and all of that is perfect. And then what I want to do is different from that; it’s more of a blue sky.

Speaker3: Yeah.

Dene Grigar: And you’re talking — yeah. So that’s fine. And then we also have testing. We’ve got to do some testing there, and we have some money for testing.

Frode Hegland: Talking of testing — sorry, Peter, just one more second. I think it’s in July: Southampton is going to be doing a VR day for their staff. I’ll be going to show the Vision Pro and also to introduce our work to them, so I’ll be talking to a lot of educators about this. I don’t know anything more about that; of course I’ll collaborate with you guys as to what should be shown that day, but it’s a nice deadline to know it’s coming up. Peter.

Peter Wasilko: Yeah. In addition to representing individual users within the space, I think we should also represent software artifacts that are applicable to operating in the space. And the main example is: if you look at the JavaScript ecosystem, you have a control file called package.json, and that’s where you provide the metadata for all of the packages that the program you’re building is going to depend upon. That can either be done as a hierarchy of package.json files for a whole network of related projects, or there can be a single top-level package.json file that contains all of the dependencies for everything underneath it, however many layers you go. Then monorepo software was developed, which basically automates controlling the relationships between those different sub-packages. So with a monorepo tool like moon, you can automatically synchronize the packages, so that if I change a dependency at the top level, that can then get applied to update the dependencies at all of the lower levels. The other critical element in package.json files is that they usually have a scripts field, in which you put the command-line formulas to invoke whatever tooling you’re using to work on your repository, to avoid having to type in the individual things. You’re then able to sort of abstract over that: you can have “run dev” as your command, as opposed to “ember --watch --output-directory” whatever, and all of the long, complicated details. You can just sort of subsume that in the name.

Peter Wasilko: So the monorepo software is then able to extract those scripts from the package.json file and give you a user interface, so that instead of looking at the scripts and having to remember them, you get a little widget in your Visual Studio Code sidebar showing you your hierarchical decomposition of subprojects. And when you twist down the expansion tab on each of them, you get just the names of the scripts, and you can click on one to run that script. And it makes your workflow so much more efficient. Now, in our context, any scripts that are being applied to our conference proceedings collection to do things — like maybe running an external tool to generate a concordance of all of the words that appear within the corpus — could be represented as a script. And then in our Visual-Meta we should have, you know, “build concordance” and then whatever the linkages are to the tooling, so that the system could automatically do that. But that then becomes an element within the data space. And where we have searches controlling what’s shown, that search could be represented in a Benedikt-style cyberspace as an actual point, and that point would represent the query upon the dimensions; at that point it would then cause the script that we’ve linked to through our Visual-Meta to be invoked on the overall data set. So then you’d be able to visually see all the queries that are currently operating on the data space — a representation of what queries the users in the data space are currently running.

Peter Wasilko: So now, imagine a three-dimensional space, and one of the dimensions is, you know, the name of the sub-project that the person is looking at, and another dimension could be what kind of task he’s doing. Then I could see that at the moment Fabien is inside sub-repository four, and he’s writing a code review of the file that he’s finding there. So I’d be able to see what the other person is looking at from the location of their representation in the space, along with what tools are applicable to that location. So that’s sort of where my thing is going, because Benedikt-style cyberspace got sort of tossed on the wayside: when Benedikt did that book, we simply did not have the JavaScript ecosystem and all of the wonderful tooling to be able to implement it. We had the same thing happen in law when Stuart Sutton wrote his dissertation on visualizing and working with institutional memories in law — there was absolutely no technology to implement any of it. So he had his wonderful dissertation describing what the visualizations would look like, what the underlying data structures would look like, but because the tooling wasn’t there, nothing happened, and that work got semi-lost to the ages, except for a few people like me who kept running around every five or ten years making people take a look at it. Okay, done with the brain dump, right?
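
A loose sketch of Peter’s idea of exposing tooling through metadata, modeled on the package.json scripts field; the entry names and commands here are hypothetical, not existing tools:

    // Analogous to a package.json "scripts" entry: a short, human-readable
    // name mapped to the command line it stands for.
    interface CorpusScript {
      name: string;         // e.g. "build-concordance"
      command: string;      // the command it abstracts over
      description?: string;
    }

    // Carried in the corpus metadata (Visual-Meta or otherwise), such entries
    // could be listed as clickable actions, the way a monorepo sidebar widget
    // lists package.json scripts.
    const scripts: CorpusScript[] = [
      {
        name: "build-concordance",
        command: "concordance-tool --input proceedings/ --out concordance.html",
        description: "Generate a concordance of all words in the corpus",
      },
    ];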

Frode Hegland: Briefly — I see Adam’s got his hand up. Yeah. You really, really need to get an Uber and ride to the Apple Store and use the headset. You just have to, because half of the Benedikt stuff is rubbish and the other half is brilliant, purely because the opportunity to use it now exists. So figuring out the brilliant from the stuff that just doesn’t work means experimenting and experiencing it again and again and again. So many things — Adam introduced a new term to me the other day, which is spiderweb; you know, lines in XR can be horrible because they spiderweb. Looking at the Benedikt stuff, which conceptually is all great: when you start implementing it, it just doesn’t work. And that is what innovation is. That is not at all a criticism of his book — I think we will invite him to our book. But Peter, you desperately need to put the thing on your head and play with it. You get a half-hour demo for free in the Apple Store no matter what, anyway, so I think that would really inform some of your design decisions.

Speaker3: I don’t. So.

Adam Wern: Mark and I are also writing a paper, or attempting to — kind of taking the reference visualization we had in 2D and trying to do it in XR. It’s already working, but not that useful yet, because we don’t — so we need to find out what’s most useful. But it needs a write-up in paper form in order to be presented, you know, so we were going to write some sort of short paper for the practitioner track. So one question is whether this paper should slot into the Sloan Future of Text stuff or whether it should be on the side somewhere. I’m just saying it because sometimes it’s useful to combine — you have different tags and so on, depending on the needs here.

Dene Grigar: I’ll be honest with you, Adam: I think any kind of paper we do is going to be great under the aegis of the Future of Text and XR. The more papers we can go to Sloan with at our year-end report and say, we did, you know, four papers at this conference — and oh, by the way, we did this and this and this and this — because I’ll also be doing a demo at the Electronic Literature Organization conference in July, and I’ll be talking about the project at the MLA, the Modern Language Association, in January in New Orleans. So any time we do any of this work, it looks really good to a funder, and if we want more money in the future, it shows output. I mean, the thing they want to know is how we are sharing this information — what’s the dissemination plan? And right now we have a very modest one, right? ACM Hypertext, one paper. Now we’ve got potentially up to four, maybe more. The book is also already included in this. But what else can we do? And we have lots of conferences and presentations and things we’re doing outside of what we promised. That looks good. So if anybody wants to write a paper for ACM Hypertext on your very own or in different teams, God willing, please do it. But it’d be good to reference the grant and the program.

Frode Hegland: And of course, it’s also very nice for us to be able to cite each other’s papers within the same conference, so if we coordinate correctly time-wise, we can do that. So, Adam — and also Brandel, just ten seconds for you as a recap — one of the things we decided on as the object of work is not just a document or an entire library: it is the conference proceedings, because that is a collection of many documents that every academic will have to go through to decide what to read and what not to read, and it is also relevant to them attending that conference itself. So, in relation to what you’re talking about, Adam, the way that I see it is that we build a core: put your headset on, here are the conference proceedings, see them as a list, see them as a map — wow, how amazing. And then you choose to interact with something older, and that would then open up what Adam and Mark have been working on, because that is the historic breadth, so to speak. So that becomes maybe another sphere, maybe another window — we don’t know yet. So the papers can also kind of overlap in that sense. And also, I look forward to deciding what should be encapsulated in these little spheres. Some of them will just be plain layouts. Some of them, Peter, will definitely be the kind of code you talk about, if we can do it. But what will be very, very exciting as well is if we can do the Fabien-style thing: here’s a thing, and we programmatically somehow say how it relates to another thing, so you start having a very interesting grammar of the space. So as we keep developing this over the next few weeks, we will figure out more how it fits a paper. And don’t forget, the deadline for the paper — it’s not the full paper that has to be written out, it’s the proposal, right?

Frode Hegland: And then the full paper is a bit later, right?

Dene Grigar: We submit the whole thing. The whole thing has to be submitted as a paper.

Speaker3: Yeah, yeah.

Frode Hegland: But in order for it to be approved, we don’t have to submit the final paper in May. Or do we?

Speaker3: We do. Okay. I thought it was peer review. Okay.

Dene Grigar: I’m on the peer review committee, so I’m going to be expected to look at the full papers to make a determination whether or not they’re acceptable. We also have an awards committee, so it’d be nice to see our big paper win an award, right? And I’m hoping Rob will be on the awards committee with me, if I’m chairing.

Speaker8: Thanks.

Dene Grigar: It’s an awards committee for the Michael Joyce Award that we’re instigating. So yeah, full papers. That’s why I want to get us moving on mine as fast as possible.

Frode Hegland: I’m just going to copy a link in here — just pasting it in the chat. If you guys can have a look whenever you have a minute, or rather a few minutes. Oh, no, that’s the long one.

Speaker3: I know this is very odd. Let me give you a second one. I’m so sorry. Boom. Yeah.

Frode Hegland: So the first one is — we’re going to be having a chat today. Now, with Adam having kids and time differences being difficult, we will probably need more daytime, European time, as well. So we all need to find the balance between when we’re all together, when we do different groups, how we report to each other, and so on. The second one is the one I suggested you look at; it’s quite short. Talking of which — you have to go? That’s okay, Adam. See you later.

Frode Hegland: Okay.

Peter Wasilko: Yeah, yeah, I’m gonna have to bow out at the top of the hour, too. I have to get on the road.

Dene Grigar: Me, too. I’ve got a meeting at nine with an artist.

Speaker3: I have a meeting at nine also. So I just.

Frode Hegland: As a brief re-summary — partly using Brandel coming in now as an excuse. So, these are all the papers from last year. We would have something similar, and then we can do this kind of stuff to see connections between things. I want this to be ready as soon as possible, Andrew. And of course, that doesn’t mean just you working; it means that we all have to decide on what database — excuse me, what metadata — goes in, and how we generate it, how we get it. It means that we need to discuss the basic layouts, and what people should just choose themselves, like maybe keywords. There’s something strange here — not all of them are working right now. I’ll have to restart my machine.

Frode Hegland: Because I expect with the different personalities in the group and the different use cases, there’s going to be very, very different ways we’re going to be interacting with this kind of information and moving things around.

Speaker3: So let a better.

Frode Hegland: Any thoughts on how we should move ahead to implement this? Because we need the metadata, the data to display and the interaction. These are very specifically different things we need to work on.

Dene Grigar: How are you planning? So with my paper I’m using Google Docs and you’re not. So how are you planning to have us all work together on one doc? What format will that be?

Speaker3: I don’t know yet at all.

Dene Grigar: Okay. All right, well, I’ve got to go. So, let us know in the Slack channel how you want to proceed. But I would divide up whatever paper style you’re using, whatever tool — divide it into those four areas, and give us some particular readings that we might want to start with. We should probably look at a few things; we have references. And I will set up a Google Doc and drop the Google Doc URL in our Slack channel for folks that want to work with me on the XR and hypertext one. Thank you, Rob, for joining me on that one.

Speaker3: Let’s talk offline.

Speaker8: Okay.

Speaker3: Bye bye. Thank you.

Frode Hegland: Right? So now we’re off.

Speaker3: Yeah, all this paper stuff.

Frode Hegland: We’ll have to get our heads around it. We also need to start inviting people, and The Future of Text is getting a bit down to the wire. Anyway, we have the top brains left in terms of making stuff. Any thoughts, guys, on how to go about creating a viable system whereby a publisher such as ACM can publish with the right kind of metadata, so these views can be generated?

Speaker3: What?

Brandel Zachernuk: What do they need right now? Like, I presume that — but this is for the proceedings, like the listings of papers, or for…?

Speaker3: Yeah.

Frode Hegland: So my dream is very, very simple, and it seems there is buy-in for this. And that is: we do it manually for now, with some AI assistance, whatever, right? This big thing, which can be read as a linear document with every single paper — let’s say as HTML, so you can expand and contract as you want to. And then all of the documents are defined items, so that you can view them in the dynamic map view as well. In order for that to be possible, each document must at least have keywords and author names available, and preferably also any names mentioned. So when you select stuff, you can then see connections.

Frode Hegland: The things you select on the map also have types, such as person, document, location, or event — there don’t seem to be many more useful ones. You can toggle a whole category on or off, which is very useful, because when you toggle them on they’re all selected, at least within “author”, meaning that you can then move them all, and you can do layout commands such as turning them into a column instantly.

Frode Hegland: Further stuff we need is the ability for the user to say what’s important or not. You know, some authors are important because they’re friends or colleagues; some are important because they’re titans in the field, and they may even be deceased and wouldn’t have authored a paper here. These seem to be special categories that are useful for layouts. For instance, if I want to see who has written about history, one of the things I can ask is whether Ted Nelson and Doug Engelbart are in the papers — that’s one example of a keyword.

Frode Hegland: So the whole idea — what I’m desperate to build with you guys is essentially an interactive, intelligent table of contents or outline for proceedings, where you can view things in many ways, but you can also read each article in full. Proceedings, by the way, Brandel — as, of course, you know, but just to highlight the relevance — are exactly the same as a journal or a book like The Future of Text: the idea is that it contains freestanding articles by the same or different authors.
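
A compact sketch of the map-item model just described — typed items, per-category toggling, and a simple layout command; the shapes and numbers are illustrative only:

    type ItemType = "person" | "document" | "location" | "event";

    interface MapItem {
      id: string;
      label: string;
      type: ItemType;
      important?: boolean; // user-marked: colleague, or a titan in the field
      hidden?: boolean;
      x: number;
      y: number;
    }

    // Toggling a category selects every visible item of that type at once, so
    // a layout command such as "turn them into a column" can act on all of them.
    function layoutAsColumn(items: MapItem[], type: ItemType, columnX: number): void {
      items
        .filter(i => i.type === type && !i.hidden)
        .forEach((item, row) => {
          item.x = columnX;
          item.y = row * 40; // fixed row spacing, purely illustrative
        });
    }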

Frode Hegland: Well, Andrew, it’s worth mentioning that the radar guys are working on the JSON, to be able to provide you with a useful JSON right now. They just asked me some more questions, which is always good — that means they’re working.

Andrew Thompson: Is this going to be in the same format as the template that I sent in, or will this be something they’re designing that I will sort of work around?

Speaker3: Okay.

Frode Hegland: I’m not saying it’s going to be that, but that’s what they’re trying.

Brandel Zachernuk: Aspirational, I like that. So the question is how to allow publishers to produce media that has the ability to be compatible with the thing.

Brandel Zachernuk: I think making sure that we have a reasonable sort of overview of what publishers will do by default is an important first step, because it’s very difficult to have leverage over people to do anything that you need until they know that there’s value in it. So being able to work as closely as possible to something that is achievable on their side in the first place is a good start for us, and I think we’re well into that; we have that kind of common ground. One of the things that proceedings don’t particularly have the ability to do — nor does a journal — is provide level-of-detail overviews of themselves, in terms of what the different sections of content are. They’re all sort of meant to be observed in a particular way, at a certain degree of remove or lack of remove: you’re right up in the content in the conference proceedings. And so you don’t get, you know, thousand-foot or 10,000-foot views of what each of the themes or the topics are and stuff like that. So, from the perspective of the tool and the environment that is created to live in that work after the fact, having the ability to reacquaint oneself with those highest-level sections and regions would be something that by default is not being provided by the publication or the proceedings, to my mind. And so trying to attend to that as a responsibility would be important from the perspective of the tool. Does that make any sense?

Speaker3: It does.

Frode Hegland: Let me just show you this.

Speaker3: So I took all.

Frode Hegland: The proceedings of last year and made them a section each. So it’s the title author names first sentence of the abstract.

Speaker3: And this, which.

Frode Hegland: I’ll show you in a second. Each title is a defined term.

Speaker3: Which has the.

Frode Hegland: Author name so it can connect to those. And I generated summary, but also keywords and extracted names. Some reason. Here they are together. If you look at here has the name separate. Right. So that means on the first, most basic level, as we’ve kind of seen before, you can read through this. And if you find something interesting let’s say this one click here. And that reveals the second sentence of the abstract. And if you now click on it, it opens the full document to that page, the document that was cited for. So that’s a little bit of the connected thing. But what we’re talking about now is really going that next step of going into the map, which you’ve seen, of course, you’ve implemented it in webXR because then we can do things like this. You know, now, I can say that most people in the group that I work with are UK based, for instance. There are some design issues here. One thing I’m trying to do is when you select something, whatever it points to should be much bolder. Now, for some reason.

Speaker3: It doesn’t.

Frode Hegland: Always work. The selections like right here. It’s hard to see. Philip. My.

Speaker3: You know who. You know. Yeah.

Frode Hegland: Anyway, these are visual points. But what I’m trying to say is, if we build an environment like this in webXR, where you have opportunities to, you know, do layouts a little bit automatically, like context menus or whatever, I can do something like this. Now I’ve made a mess. So I can now.

Speaker3: To hide documents.

Frode Hegland: All the documents are gone. Bring them back. They’re all here. And then I do this and this, and now they’re laid out right. So you can do that in a space, because doing every single thing manually is, of course, not optimal. So that’s part of it. But then once you’ve spent the time and effort to get these things the way you want, you want to be able to obviously save these layouts in little spaces. You may also, and this was what Adam talked about earlier, I’ve made a map of the world just with the names, just for fun. That could be a little element that could be inserted.

Speaker3: You know, we can.

Frode Hegland: Choose who, you know. People to hide, people to show.

Speaker3: People will be able.

Frode Hegland: To choose what kind of keywords are relevant and so on. So we need the way to store this metadata. Some of it should be produced by the publisher.

Speaker3: Some of it.

Frode Hegland: When you do the layout, it will be produced by the user. And that which is produced by the user needs to be shareable as well, and storable.

Speaker3: And I think. We’ve talked about.

Frode Hegland: Different graphs and maps literally for years. Now that we’re doing the proceedings, I’m so happy we have a frame to do it in.
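As a rough sketch of the kind of per-paper record being described here (title as a defined term, authors, abstract sentences revealed in stages, generated summary, keywords and extracted names); the structure and field names are an assumption for discussion, not the actual export format:

```typescript
// Illustrative per-paper entry; names are assumptions, not the real export.
interface PaperEntry {
  id: string;                       // e.g. DOI
  title: string;                    // also used as a defined term
  authors: string[];
  abstractSentences: string[];      // [0] shown by default, [1] revealed on click
  pageInFullDocument: number;       // where a click-through opens the proceedings PDF
  generatedSummary: string;         // produced by us, not the publisher
  keywords: string[];
  extractedNames: string[];         // people and places pulled out by entity extraction
}
```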

Brandel Zachernuk: As you were looking through that stuff just then, one of the things that I was thinking a lot about was the matrix, the balance of different families of type and scale and weight, in order to make sure that the most relevant sort of things were visible. And, you know, if you’ve ever seen on social media people writing things like “you will read this first, then this second, and then this third”, the type is arranged in different colors with different font weights and stuff like that. There are ways of prioritizing the different pieces of it in a way that means that they simply jump out at you. I think one of the questions that will come up for people if they are using a system like this is: are they reading or are they scanning? Because when you read, you’re committed to being able to follow something. You have a reasonable capacity for leveraging your short-term memory, often at the expense of what else you’re doing. But when you’re scanning, then you’re looking for much more coarse-grained signals. And so having the ability to have not just bolder things, but also comparatively larger ones. And also, I think that the maximum of 30 ems is a really good rule of thumb. It’s not 30 characters; that’s 30 ems in a variable-width font. So that can get up to 35 characters or so. But yeah, when people are viewing things in that mode, it’s a lot easier to follow.

Brandel Zachernuk: And to your point, Frode, it might not always be title. It often will be, but sometimes it’ll be author, to see if there are specific authors that you like, and things like that. So yeah, just thinking about what people might be doing and what might be the most visible way to render the details that are germane to them, in a way that stands out rather than being something they need to seek out.
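Brandel’s 30-em rule of thumb above translates directly into a cap on label width rather than a character count; a minimal sketch, assuming fontSize is the rendered em size in whatever units the layout uses:

```typescript
// Cap a label's measure at roughly 30 ems (about 30–35 characters in a variable-width font).
function maxLabelWidth(fontSize: number, maxEms: number = 30): number {
  return fontSize * maxEms;
}

// e.g. a 0.02-unit font in a WebXR scene would wrap at about 0.6 units.
const wrapWidth = maxLabelWidth(0.02);
```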

Frode Hegland: Yeah, absolutely. And the notion here is, and I don’t know, there’s something broken with this document because it doesn’t follow through, right. But yeah, everything you said. I mean, here are all the documents. You may, after a while, not want to see all the documents.

Speaker3: But there will be some.

Frode Hegland: You’ve had a look at.

Speaker3: And, you know.

Frode Hegland: You read it or skimmed it and you don’t want it to appear; you need a convenient way to get rid of it. And also, you know, some people are the people you work with, and you want to see their documents immediately. So to be able to decide what is visible is important. It may not be anything like this layout at all, but what I think we need to get to, as soon as possible, is a point where it is possible to start playing with these things. And also, Andrew, before this I can try to export it in a different way so that you have at least this to experiment with. You know, we don’t have to wait for next year.

Speaker3: Because some of it, some of it’s pretty cool.

Frode Hegland: Some of it’s absolutely not. Like, I didn’t know that Dene wrote so many papers last year. And right now, for some reason, the back and forth doesn’t work, but I can see it.

Speaker3: Yeah. Anyway, we all.

Frode Hegland: We’re all excited by the idea of doing mapping stuff, right?

Speaker3: But we’re all.

Frode Hegland: Happy with proceedings as being kind of the main view.

Frode Hegland: I wanted to take what Adam and Marcus have done before and be able to go back into that. That’ll be amazing. So to have a view like this and then, hang on, what does this cite, and then start that journey.

Speaker3: So Andrew.

Frode Hegland: What do you want for this to be possible?

Andrew Thompson: What do I want for this to be possible? I don’t know. I would need the JSON again. It doesn’t seem that out there; it just would, you know, take some experimenting. And we’d have to go through it in steps, like: what do we want to add this week? Okay, we want to have the lines show up when you point at an author, or something like that. You know, we do it in steps. What I don’t see: we don’t have the keyboard shortcuts, so the whole hide-and-show thing will have to have a different solution. And I know there was a discussion last week about potentially removing the pop-up menu, so we’ll need to have a better solution for that if we get rid of the pop-up menu.

Speaker3: Well, we.

Frode Hegland: Have the touch the sphere. And you have this thing right. Control panel one.

Andrew Thompson: One thing I did want to sort of pitch as a suggestion, since originally opening up the library view of the sphere was going to be like something you do occasionally to grab a document. And now it sounds like that might be the primary focus. So I would recommend swapping the functions of the wrist. So tapping opens the library and holding is the sort of debug menu thing, since we don’t end up using that menu much. And we’re going to use the library a lot. So I’m thinking of just swapping the functions. If that sounds reasonable to you guys.

Frode Hegland: I’m not sure, because I can imagine the control panel thing being where you choose what to hide and what to show and layouts.

Speaker3: I mean.

Andrew Thompson: I could see that, but also. As of right now, we’re not really using it for anything besides like.

Speaker3: Oh no, but we’ve been stuff.

Frode Hegland: We’ve been in the reference section for quite a while. We’ve had a lot of learning there. So this new thing would be. Okay. Hang on.

Speaker3: Don’t just share.

Frode Hegland: Screen again. I imagine the new screen would open up something like this. You know, it would literally be all, or maybe like this. And then we would do interactions to get it like this. But once you’re here, for instance, if I do this and mess it up, for me, now I do Command-Shift-Left-Bracket to do that. It’s a Mac thing. That would probably be much better off as a button on the control thing. And if I now press V for vertical layout, that’s there. We don’t have a keyboard necessarily for what we’re doing, obviously, but these are very, very handy things. We do have this.
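One way to read this is that each layout action should be a named command that can be bound both to a keyboard shortcut (on the Mac) and to a button on the in-headset control panel. A minimal sketch under that assumption, with invented command names and stub implementations, not anything from the actual project code:

```typescript
// Hypothetical command registry: the same actions drive keyboard shortcuts and panel buttons.
// The layout functions below are stubs standing in for whatever the real implementations are.
function layoutSelection(mode: "vertical" | "horizontal"): void {
  console.log(`laying out selection: ${mode}`);
}
function setDocumentsVisible(visible: boolean): void {
  console.log(`documents visible: ${visible}`);
}

type Command = { id: string; label: string; run: () => void };

const commands: Command[] = [
  { id: "layout.vertical", label: "Vertical layout", run: () => layoutSelection("vertical") },
  { id: "layout.horizontal", label: "Horizontal layout", run: () => layoutSelection("horizontal") },
  { id: "documents.hide", label: "Hide documents", run: () => setDocumentsVisible(false) },
  { id: "documents.show", label: "Show documents", run: () => setDocumentsVisible(true) },
];

// Desktop: bind "V" to vertical layout, as in the Mac demo.
const keyBindings: Record<string, string> = { v: "layout.vertical" };

// Headset: render the same commands as buttons in the control-panel area,
// so people who never touch a keyboard get the same fluidity.
function panelButtons() {
  return commands.map(c => ({ label: c.label, onSelect: c.run }));
}
```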

Speaker3: Okay.

Andrew Thompson: I’ll leave the the menu buttons as they are then for now in case we want to put the functions there. Sounds reasonable.

Speaker3: Whereas. Institution. Yeah.

Frode Hegland: So these are the workshops. You can see they’re kind of different.

Speaker3: Yeah, I think it’s going.

Frode Hegland: To be an interesting journey. The earlier meeting today did go quite far into the paper-writing side of it, which is obviously important time-wise. We also need to figure out how we are going to go about this, because there will probably be very, very different needs, even in this group, for what should appear.

Speaker3: So what I.

Frode Hegland: Suggest we do is

Speaker3: You know, I’ll try.

Frode Hegland: To export this.

Speaker3: For you.

Frode Hegland: And you try to put it in. And if we just have a way where we separate the imported metadata from the user’s spatial metadata, that in itself, I think, will be a huge start.
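A minimal sketch of that separation, keeping what the import provides apart from what the user lays out so the user side can be saved and shared on its own; all of the names here are assumptions, not an agreed format:

```typescript
// Two distinct blocks: imported (publisher/export) metadata vs. user-generated spatial metadata.
interface ImportedMetadata {
  papers: { id: string; title: string; authors: string[]; keywords: string[] }[];
}

interface UserSpatialMetadata {
  layoutName: string;
  createdAt: string;                      // ISO timestamp
  nodes: { paperId: string; x: number; y: number; z: number; hidden: boolean }[];
}

// The combined workspace keeps them side by side rather than merging them,
// so a shared layout never has to carry (or overwrite) the source metadata.
interface Workspace {
  imported: ImportedMetadata;
  user: UserSpatialMetadata;
}
```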

Andrew Thompson: And I won’t need that until Wednesday at the earliest, because I’ll be working on the sort of rough document reading until then. So no rush on it.

Speaker3: Yeah. No, that’s that’s fine.

Frode Hegland: I’m just. I just exported an HTML. I don’t think it has much to do with the.

Speaker3: Just open out with. Text added. That’s not very.

Frode Hegland: Interesting.

Speaker3: Right. There. So, Brandel.

Frode Hegland: You’re probably the only person in the group not excited about tomorrow. Well, maybe Andrew, probably not, but I’m excited about tomorrow. There will be new Apple things.

Brandel Zachernuk: Oh, yeah, I guess there will be.

Speaker3: Yeah.

Brandel Zachernuk: I don’t know what they are. That’s a novel for me. I mean, obviously you know, everything seems to be about an Apple Pencil, but that will be really interesting to see what that is.

Speaker3: It will.

Frode Hegland: Be. We talked over a year ago about having an Apple Pencil with a Vision Pro, obviously not knowing the name Vision Pro, but if anything like that happens, it’ll be kind of interesting, but we’ll see. So they can. These things can go in many directions as they have done.

Speaker3: But yeah, but Brandel.

Frode Hegland: From your perspective, you don’t have a lot of time for this. Now I understand, however, I think that from what you’ve done with the different book presentations and so on.

Speaker3: Not being able.

Frode Hegland: To get our teeth into a proceedings is very much up Brandel’s street.

Speaker3: Macos.

Frode Hegland: In addition to the kind of stuff you saw, this same metadata should be used to have spatial layouts. Yeah.

Speaker3: Yeah.

Brandel Zachernuk: Well, so that’s where, you know, my feedback related to level of detail is sort of located. Trying to figure out what it is that you can do to provide presentations at a certain point in a viewer’s, or your user’s, relationship with the content. You know, a book as an artifact does a couple of jobs at a couple of different levels. Obviously it contains all of the text. But it also stands in as a fiducial, as a marker of the contents of the text. And, you know, a number of the times we use a book, we’re actually not even consulting the text; it just happens to be the thing that we know has the text in it if we were to look for it. And so thinking about those different levels and making sure that things can remain legible between them. We sort of forget that they don’t have to be the same thing at all times in order to do that job. And the beauty of a digital space, a digitally mediated information space, is that you can pick the right entity for that job, providing that there’s enough conceptual, philosophical continuity between what it is when you use it as this versus when you use it as that, and you can transition between them. So yeah, I think being able to pursue.

Speaker3: A.

Brandel Zachernuk: A space that allows you to make decisions. Because the thing about what we’re talking about with the proceedings, with all of these things, is that we’re not simply presenting them in a neutral context, in the sense that it’s the same job for everybody and we can do it better or worse. What we mean is that we want to furnish a user of our system with the ability to make choices, and glean information for making better choices, about what they’re doing with the contents of the text, and to orient and navigate, orient themselves as they navigate the different chunks of information, and the choices that they’re making with regard to how they’re cleaving documents apart if necessary, marking those things up. It’s a very active process. Even when people aren’t physically manipulating things, they’re creating an orientation in relation to all of these things. And so.

Speaker3: That.

Brandel Zachernuk: That, like, where the user is in all of this is the thing that I think is the most. Like, our vocabulary for understanding that that is the goal of creating information space is the most

Brandel Zachernuk: Starved. At the moment, because we have this sense that, like, you’ve got papers, that’s just what they are, and the user is an observer, a sort of neutral party to what it is that they’re doing amongst all of those things. And impoverished is the word I meant rather than starved. And nothing could be further from the truth. So yeah, that’s just what I’m thinking about at the moment: the embodiment of the user and the choices they make, and how to bring those to the fore.

Frode Hegland: Yeah, absolutely. That is super important. So this makes me upset again that Alan Laidlaw is not active in the group anymore because he was good at pointing out those types of things. So that was that’s a loss. I hope he’ll come back. I do text him every once in a while. The two other things. One is I came across this weird thing the other day that that led to.

Speaker3: Some people talk about.

Frode Hegland: Ease of use. Doug Engelbart, as you know, hated the notion of ease of use. But of course things cannot be difficult. But this other thing: they talked about tape cassette recorders from the 70s and all kinds of things, and they had this great perspective that if you want your product to be successful, your user should feel that they are good at it.

Speaker3: Right, which is an amazing framing.

Frode Hegland: I think, like, if it’s a computer game, you should be allowed to feel that you’re good at it. Even if it’s a pair of scissors, you know, the weight, the balance, you should feel you’re good at it. In other words, it should convey a feeling of skill upon the user, without being patronizing. Isn’t that fantastic?

Speaker3: So I’m using that.

Frode Hegland: Thinking now with Author. Someone sitting down who doesn’t care about this stuff: what can I do to help that user feel like they’re good at it? I mean, of course with a pen or a sword there are specific things you can do; how can we do it in software? And that relates very much to the second point. I keep talking about the importance of the right resolution with photography, and in the last few days I finally found out why I think it’s important. It comes from audio. A friend of mine is married to an audio engineer. It turns out they realized that she gets a headache if she listens to low-bitrate MP3s, because the brain has to do too much work to fill in the gaps. I think when we’re looking at an image like we’re looking at each other now, which is low resolution, we cannot do more than glance. If we’re looking at a retina-quality, high-resolution image, a photograph or whatever, and it is of a quality where there isn’t anything obscured, our eyes can rest. We don’t have to make up for what’s missing. So on an artistic level, I think that is beautiful, because it allows you to really look at something and not just look at its splodges, but look at the thing as much as if you were there. Time is frozen, right? This relates to our work in the spatial thing, in the sense that I really want us to develop a system for high resolution thinking. That in the future, if you worked in this space and you take it off, you will feel that you’re looking at a smudged version of your world. And in parallel, we need to build the interaction so that, with a little bit of training, the user should feel proud that they are good at it. Good at high resolution thinking.

Brandel Zachernuk: Yeah. No, I think that’s reasonable. I’ve been given that feedback about giving people instructions on stuff: that you shouldn’t tell them that something is easy; you should make them feel like they’re badass and awesome for being able to follow what you’re doing, and try to make it as easy as possible. I also remember talking to a bunch of folks who worked at Zynga when it was a more relevant affair to contemporary culture, the people who make Farmville and things like that. I think Farmville still exists. And yeah, about the fact that they were in the business of making games that you felt like you were good at, with a minimum level of effort. And in fact, a lot of people do that. Warcraft III, in fact, is an incredibly easy game to play if you half-ass it, and it just provides dynamic feedback about how hard it pushes back on you. So, yeah, that sense of mastery and competence and capability is a really important thing to be able to imbue. In video games, people call it juice: you juice the interaction.

Brandel Zachernuk: Then you add, you know, camera shake and sparkles and particle effects and stuff like that. And that kind of drama is absolutely critical for being able to give people a sense that these actions are significant and they’re important. And one of the interesting questions of application design is that, you know, presumably Adobe Photoshop is not juicing interactions, but maybe it should. It certainly takes long enough on startup these days that it’s not completely neutral with regard to the way it presents the cost of operations. So yeah. I kind of think it would be good for them to think about the conceptual function of the capabilities they’re providing as well, in order to characterize and contrast the feeling of those. Because, to your point, how do you make people feel awesome? You give them feedback about what’s awesome and what’s kind of a desirable set of actions within the context of a game, or in the context of an interaction with a platform. So I think video games have something to do there.

Speaker3: Yeah, I.

Frode Hegland: Think with video games, it’s interesting. I only play battlefield. I don’t want to play anything too involved because it’ll take too much time. But there I can go in and I can shoot around, run around and blow up things and finish whenever I want to. And so part of it is when I managed to get a oh my God, I went around the corner and I got you that kind of thing. That is clearly a reward, so to speak. So it’s interesting talking about being told what is good. You know, I’m a keyboard shortcut user like you guys.

Speaker3: Many people.

Frode Hegland: Are not. And I remember Vint Cerf looking at some of these things, and I tried to tell him really what’s happening. And he thinks that I’m such a wizard. I’m not a wizard; that’s the whole point. But that is because it’s not his way of working; that’s just what it looks like. So that’s why we have to try, for our new interactions here, to provide the same fluidity, but not have this thing where you watch someone else do it and you think they’re an expert and you’re not. This is probably why this touch here, and getting this beautiful, simple control panel to change the views, will be very important. I’m a huge fan of context menus, but they’ve got to be short and they’ve got to be relevant.

Brandel Zachernuk: Yeah. Yeah. And I think it’s interesting reflecting on the disparity between sort of cinematic uses of virtual and augmented reality, spatial interfaces and computing, and the practical reality of what people do to actually facilitate interactions within any program. So again, Minority Report or Iron Man versus Photoshop or Excel. And, you know, the reality is that we’ll do Excel in VR, but it’s going to look more like Excel than like Tony Stark creating arbitrary awesome hand cannons and things like that. And so it’s worth being aware of the practical realities of what kinds of jobs people need to do, in order to imagine what the spatial equivalents are. But I do think that there’s going to be some interesting gestural space in order to facilitate things. And, you know, to your point, Andrew, about the frequency of one set of actions versus another, and whether a dwelling pinch versus a momentary pinch, or a pinch-and-tap, or a tap-and-hold versus a momentary tap, are the better actions, is part and parcel of constructing this kind of gestural lexicon: an action set that has something of the Tony Stark, Minority Report, whatever, but is also meaningful enough to be able to do the job of actually raising cells and being able to create pivot tables.

Speaker3: So on a.

Frode Hegland: Practical level, I guess the answer is you already have a lot to do for Wednesday. You’re working on individual document displays primarily, right? Then we need to work out what’s in the document metadata and what’s in the space metadata. We just need to start saving it. And if you just want to start experimenting with that, Andrew, by writing down coordinates in your own way into a JSON, that’s absolutely fine by me. If you want more dialog or help on that, that’s absolutely fine too. Because we need some of these discussions with the wider team. And, you know, Peter goes into some strange perspectives, and when he talks about kind of embedding code in these things, he’s right. At some point we need to do that. And so it shouldn’t be too simple. I don’t know.

Speaker3: Fabien, you’ve.

Frode Hegland: Been very quiet. What do you think about all this?

Fabien Benetou: I think it’s, yeah, about juice or empowerment. It’s exciting when you can do more with it, whatever that thing is, be it XR or programming or writing. So until we get to that point of being surprised by being able to do more, it’s a little bit, not blind faith, but yeah, we power through because we know it will be that next step. And until we can make others who don’t care about it have that feeling, that through the tool or set of practices or both they are empowered, it’s a bit like swimming upstream. And I think people are right to be skeptical, let’s say. So if we don’t want to either make people feel powerful without it being justified, or feel bad because they don’t get it, then it’s that special moment of, oh, well, I never thought I could do this, and now I can. I think that’s what will get people excited, and then they can give us more exciting feedback. Or, yeah, not yet another feature, but the one thing that they need.

Brandel Zachernuk: Yeah. One of the things that I think about in that context is, so I don’t know if people ever call it this outside, but when you drag stuff around in visionOS and you drop it not in an application, they call it dropping it into the void. So, you know, I think in public, you can pinch and remove, like, an image, and then it’ll go into a photos mode as it’s put out into the space, and it’ll be rendered in Quick Look in a way that is functionally equivalent, at some level, to downloading the image and then pressing space and having it open in Quick Look, or double-clicking and having it open in Preview. But the independence of that artifact as it exists, as a consequence of that set of gestures, is much more immediate, in the sense that you can drag it to nowhere, you know. Whereas on macOS today, there’s nowhere that’s nowhere. It’s either a Finder window or another application, and it depends on what you do to those things. So if you drag something into Google Docs or into Microsoft Word, obviously there’s a function for that. But all of the pixels are something, right? They’re the Terminal, they’re the whatever. But on visionOS there’s the void. And when you drag something into nowhere, then it becomes itself, as currently represented by Finder.

Brandel Zachernuk: But on the other hand, also, it’s already an independent artifact in the sense that it exists in and of itself. It’s a file, be it an image file or whatever else. I’m just thinking about the, you know, everybody’s had this moment, and I don’t know whether people have done this on macOS, but certainly in Windows back in the day, in my teens, I had a clipboard manager, and I had the ability to manage multiple documents with multiple snippets, and it provided the slightest bit of context about where they were from. But nevertheless, it was still an independent application that kind of gave reference to those things so that you had a history of them. And so it became another space. But the idea of being able to kind of invoke the thingness of something by just dragging it somewhere, I think, is really interesting for being able to separate it. This is, you know, relevant within webXR, where you have full control of it, but also in a general spatial environment. Going from a piece of information being a thing that is compositionally part of a document to being itself, a reference that you want to keep alive, I think is really interesting. And what you do with that, in terms of composing and crafting a space that makes sense of things for you, is, I think, a really important question to pursue.

Speaker3: Have you?

Brandel Zachernuk: You’re playing with dragging stuff out.

Speaker3: But right now, I’m.

Frode Hegland: Actually just Here, I’m on the moon listening to you, but I’m also watching a video shot with a headset outside with a fire pit. Just this weekend. And I’m just thinking about the notion of space and resolution while we’re together now.

Speaker3: It is such.

Frode Hegland: A different thing. And so, yeah, what we managed to do, I think I told you guys last week: the table of contents, which is the outline, in

Speaker3: Excuse me.

Frode Hegland: Author, is now a free-floating window.

Speaker3: Which is nothing more.

Frode Hegland: Than a kind of a tech demo.

Speaker3: We were there, right? Yeah, yeah. We’re listening. Oh, it’s just that.

Frode Hegland: The screen disappeared over. Couldn’t see you. I need to have you back. Sorry. One second.

Brandel Zachernuk: You’re on visionOS 1.1 or 1.2?

Speaker3: Good question. Let me find that out.

Brandel Zachernuk: It doesn’t matter particularly. I don’t know that there are any particular differences. The big one for me was 1.0 to 1.1, which was that webXR update, the one I wrote that post for.

Speaker3: You wrote. What? But I’m on one. I wrote that.

Brandel Zachernuk: Right? Yeah, yeah, yeah, I think you would have had it pushed up. No, the webXR transient input, the eyes and hands, natural input support. Introducing natural input support for webXR on Vision Pro.

Speaker3: It’s a little bit.

Frode Hegland: Annoying that the eyes here are not always active. I put it on a friend today and I could see my eyes on him because I forgot to do guest mode, but now you can’t see anything.

Brandel Zachernuk: Yeah. It’s not saying our creative well enough. Maybe if we get up.

Speaker3: I love it.

Frode Hegland: But you can’t see my eyes. You should be able to see them here, right? It’s nothing.

Speaker3: Well, it.

Brandel Zachernuk: Only does that if the device recognizes another person by using power. Otherwise.

Speaker3: Really? Yeah. I did hope they wouldn’t have been.

Brandel Zachernuk: And they wouldn’t have been your eyes, Frode. They would have been guest eyes. It’s just that maybe your eyes look enough like guest eyes that it screws them up. I would expect that they.

Frode Hegland: I made a mistake. He didn’t go through the setup procedure, so he had to type my password to get in. That’s what I mean.

Brandel Zachernuk: So a persona is still locked. A given registered persona is still locked to Optic ID, so even if you have the password, you won’t be able to use their persona, because people don’t want anybody to be able to be falsified.

Fabien Benetou: At least in in a couple of versions ago. That was possible because I, I saw some people with my eyes.

Brandel Zachernuk: Interesting. Interesting. So it’s a possibility that the eyesight picks it up, but it’s not.

Fabien Benetou: You can have indeed. You can log in with the passcode and you can have ID with eyes as optional. And then when they do this persona, at least again, a couple of weeks ago that was possible.

Brandel Zachernuk: Now that’s interesting. So I would I would suspect if they went into persona that they wouldn’t have been able to see your actual registered persona through that process at least. But but eyesight may be slightly different, given that it would be difficult. Like you would need to have like, identical twins, save for eye color or something as a thing that they would be falsifying if like if you can’t see through the rest of the face behind the Vision Pro that this person is not that person, then perhaps there are bigger problems afoot.

Frode Hegland: So can you see my eyes now?

Speaker3: No, no.

Fabien Benetou: We see a photo of your son, though.

Frode Hegland: Oh, you. Oh, so you’re saying that shared. Okay. Because

Speaker3: No, no, no, we’re saying.

Brandel Zachernuk: It reflected on your on your headset because of how to.

Speaker3: Cheat by.

Frode Hegland: Making him the the person. Let me see if I can go back to you guys. Hang on. Let me do speaker view. So. Okay, Brandel. You’re up. Convince the headset. You are here.

Brandel Zachernuk: All right. Well, so one of the challenges is that the device also makes use of a true depth sensor. So unless you have the ability to to project me onto a pincushion or some kind of set of spatial displays such that it has the size of my nose relative to my, to my eyes here, then I perhaps this is going to be something of a fool’s errand.

Frode Hegland: The thing that I found with the headset recently, putting it on different people’s heads, is, first of all, it is a computing platform. Not everybody gets it. Some people do, some people don’t. That’s been a bit of a surprise. As I’ve said before, the environments are what people really go wow over, which is kind of ironic: the thing where they can’t do anything is what they think is the best.

Speaker3: Sorry.

Frode Hegland: Just trying to see if Emily can come up and be a human here for me.

Brandel Zachernuk: It’s fine. I will relay compliments to the people who make those spaces. Have you seen Keynote is available for you, right? You can use Keynote.

Speaker3: Yeah, I.

Frode Hegland: Just saw that. And I was going to show that to a friend today. I’ve known him basically my whole life, and he finally comes here, and then, yeah, anyway, he didn’t have time. But it is very, very impressive, Keynote, to be able to stand there on the stage. But you should be able to have someone in the audience at some point.

Brandel Zachernuk: That would be nice. Could you not? I mean, it’s believable that you can’t. I actually haven’t used Keynote, maybe at all. What?

Speaker3: It is called an visionOS. Yeah. No, it’s super.

Frode Hegland: Cool in here. The strange thing is, you know, we get this thing to have on the headset to protect this, but what about inside? Nothing. So I use this. I wrap this around it when I moved about. Just a big lens cleaning cloth.

Speaker3: But yeah, that’s fine.

Brandel Zachernuk: Well, I don’t even get that. I, I get this. So. So you have one over me on that.

Speaker3: That is.

Frode Hegland: Stick it in my camera bag so it looks like a normal thing.

Brandel Zachernuk: Yeah, I think this was for. No, actually the last prototype that didn’t look like a production unit was a lot bigger. So this has actually only ever been for this scale of unit. But I guess it’s just so that it doesn’t look like it. And it’s sort of robust enough to be able to fall off my bike.

Speaker3: Yeah, exactly. That’s impressive stuff.

Frode Hegland: But it is a bit heavy. I found that if I use a hoodie on the right, it helps quite a lot. That’s not always practical.

Speaker3: Right. So.

Frode Hegland: With the with the immediate issue at hand, I guess probably it doesn’t seem very relevant to you in the beginning or does it? And we’ll obviously make sure you get the same data so you can play with it in your world.

Fabien Benetou: I did not understand the data. You mean like the JSON for the proceedings for hypertext 2023?

Frode Hegland: Yeah, but also the spatial layout.

Fabien Benetou: Yeah. I mean, you did send it. It’s not the same format as last time, I think, but I don’t think there should be a huge difference. So it should be interpretable.

Speaker3: You know. Andrew, any.

Frode Hegland: Topics or questions from your end?

Andrew Thompson: Nothing new. Really? No.

Speaker3: Yeah.

Andrew Thompson: So just just working on the document stuff on my end.

Frode Hegland: So the way we’re going to deal with document uploading for now is simply the click to upload the JSON file. Right.

Andrew Thompson: Yeah, I think that’s that’s fine for now. Yeah. Right now, mine only supports that for the workspace. So you can save workspace layouts. I think you’re talking about wanting to also save library layouts now as well. If I’m understanding correctly, you say like create layouts and I’m not sure exactly if you’re saying like we as developers create different layouts, then save those as presets. Or if you’re saying like the user can create a library layout and save that. I guess I should ask for clarification there.

Frode Hegland: Yeah, that’s a good question. I’m saying both, but primarily the user.

Speaker3: So that. Gotcha.

Frode Hegland: You know, once you’ve got all kinds of things everywhere, it’s really annoying to lose it, which is one of my prime issues with the Vision Pro: you can put things out, really make your room amazing, and then you have a restart and it’s all gone.

Speaker3: You know, it’s a version one.

Frode Hegland: Product, so fair enough. But yeah, that is important. Fabian.
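Answering Andrew’s question in data terms, both cases could share one save format, distinguished by scope and by who created the layout. A sketch under those assumptions; none of these names come from the actual project:

```typescript
// Hypothetical saved-layout record covering both developer presets and user-created layouts.
interface SavedLayout {
  name: string;
  scope: "workspace" | "library";     // Andrew's two cases
  createdBy: "preset" | "user";
  items: { id: string; x: number; y: number; z: number }[];
}

// Persisting as JSON means a layout survives a restart, which is exactly the thing
// the Vision Pro room arrangement currently does not do.
function serializeLayout(layout: SavedLayout): string {
  return JSON.stringify(layout);
}

function restoreLayout(json: string): SavedLayout {
  return JSON.parse(json) as SavedLayout;
}
```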

Fabien Benetou: Yeah, a quick word on this. That was kind of the demo I wanted to do earlier. I won’t do it with you guys because you’ll get it without me bothering with screen copy and all, but the idea is, at some point I made a basic game, like, let’s say, Minecraft, something like this, where you have your cubes and you put them in place, and then you have four different kinds of cubes. So it’s more like a voxel paint thing. Very simple, not even snapping. But indeed, the thing was, if I reload the page, I lose everything. And what I’ve done is I took all those voxels and stringified them, like a JSON, and I put this as the hash in the URL. So I got my page.com/hash and then all the cubes with their position, rotation and color. And of course, if you have millions of cubes, that’s not going to work. But if you have a couple of dozen, or even maybe hundreds, that fits. So it means if I refresh the page, everything is there in place, but also that URL, if I put it in the chat, I might actually have a URL like this.

Fabien Benetou: Now, if you open it, it’s there. And I think it’s a very basic way to both maintain the state and have it shareable. I mean, it works easily. And I think in terms of WebXR, not just it being, oh, it’s cross-platform, but using the URL as a first-class citizen, I think is quite powerful. And I think it is surprising when people go, oh, you’ve got the URL which has all the content in it; you don’t need to install anything, but you also have some content right there. It’s the same way, for example, if we look at the Zoom URL. I mean, I’m using the web page, but if you use the app it’s hidden, but you get the room and then the code to access it. You didn’t have to select any room or input any code. So I think that kind of solution, even though not perfect, is still quite powerful and makes WebXR special.
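Fabien’s approach, serializing scene state into the URL hash so a reload (or a pasted link) restores it, can be sketched roughly like this. The state shape is illustrative only, and for anything beyond a few dozen items the payload would need compressing or moving elsewhere:

```typescript
// Minimal sketch of keeping shareable state in the URL hash, as Fabien describes.
type Voxel = { x: number; y: number; z: number; color: string };

function saveStateToHash(voxels: Voxel[]): void {
  // encodeURIComponent keeps the JSON legal inside a URL fragment.
  window.location.hash = encodeURIComponent(JSON.stringify(voxels));
}

function loadStateFromHash(): Voxel[] {
  const raw = window.location.hash.slice(1); // drop the leading "#"
  if (!raw) return [];
  try {
    return JSON.parse(decodeURIComponent(raw)) as Voxel[];
  } catch {
    return []; // malformed or hand-edited hash: start empty rather than crash
  }
}

// On load, rebuild whatever was shared; after each edit, write the hash again.
const restored = loadStateFromHash();
```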

Speaker3: It’s trying to fix.

Frode Hegland: My file here.

Speaker3: Yeah. Actually, I’m going to share.

Frode Hegland: A screen with you for a second. So this is what you saw earlier, but it’s. I copied it into a new document. This is a complete mess.

Speaker3: So the first thing I’m going.

Frode Hegland: To do is hide all the people. And I’m going to bring them back.

Speaker3: And align them on the left.

Frode Hegland: So here they are. That kind of chunking I think we absolutely must have.

Speaker3: Because now it’s out of the way.

Frode Hegland: What I want to do is be able to change the the font size, which I can do here.

Speaker3: Not so easy.

Frode Hegland: Here. Yeah, it is a bit smaller, isn’t it? So now we can do auto layouts a little bit. So I just do this.

Speaker3: And press V.

Frode Hegland: For vertical boom.

Speaker3: So now we’re going to.

Frode Hegland: Do this. This one doesn’t work with the keyboard shortcut, but: documents, hide. Take everything else, move it over to the side here so I can get some space.

Speaker3: And then I bring.

Frode Hegland: Back documents again.

Speaker3: For the.

Frode Hegland: The same kind of layout, but the other way. No. This way, this way. These are all the normal keyboard shortcuts you would have in a normal Mac app, but I don’t think we can rely on people caring about that. So that’s why I think it’s so important to have these types of commands on a big thing at the bottom of the screen, the triangular Andrew area.

Speaker3: So here we.

Frode Hegland: Have a location.

Speaker3: You move those over here.

Frode Hegland: See, the selection works. Yeah. Here we go. Isn’t that... That’s better. See, now it’s becoming useful, right?

Speaker3: User workshops.

Frode Hegland: So saying may just fiddle around with these obvious things.

Speaker3: A. This is the keyword. What are your thoughts.

Frode Hegland: When you imagine this in a better space with more controls?

Andrew Thompson: Yeah, I can see it working well. Of course, right now it looks like some of the commands, say vertical, snap to your screen resolution. So that’s not going to be a thing in VR; you don’t want it to just wrap all the way around you. But we can, of course, set any kind of cap we’d like.

Brandel Zachernuk: Yeah, well, the way that I think it looked like at one point, what you did, Frode, was use the extents of the upper and lower bound of the selection in order to set the frame of how they should be distributed. And that’s how you distribute objects in Illustrator; I have a lot of experience with Illustrator. So that’s really interesting. It also made me think about what you were saying, Fabien, about URL fragments, or hashes, and providing layout information within them. One of the things that we tend to do, because we can in computing, a lot of the time, is just be super verbose about saying here is the exact x, y coordinate of all of these things, when in fact there might be a briefer, terser representation of that: the kind of thing you would say in conversation about what you’ve done, which is, I’ve made it so all of the names are on the left. And so having terser representations of those constraint applications is a really interesting thing to be able to use, both for brevity with regard to something like a URL fragment, and for the purposes of being able to conceptually encode and describe what a layout is in support of. Obviously your application needs to have the full x and y of all of the objects, trivially, in order to present them in that place. But in terms of what somebody has done, by virtue of making a specific selection and aligning them, or anything like that, being able to have that described, as well as defined down to the nth degree, is an interesting line.
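The contrast Brandel is drawing, verbose per-node coordinates versus a terse, conversational constraint like “all the names are on the left”, might look something like this; both shapes are invented for illustration:

```typescript
// Verbose form: every node's exact position spelled out (what we tend to store by default).
type MapNode = { id: string; kind: "author" | "document" | "keyword"; x: number; y: number };

// Terse form: the intent, shorter to share in a URL fragment and closer to what
// you would actually say about what you did.
type LayoutConstraint = { select: MapNode["kind"]; align: "left" };

// The application still needs explicit coordinates to render, so a constraint is expanded locally.
function applyConstraint(c: LayoutConstraint, nodes: MapNode[]): MapNode[] {
  const leftEdge = Math.min(...nodes.map(n => n.x));
  return nodes.map(n => (n.kind === c.select ? { ...n, x: leftEdge } : n));
}
```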

Frode Hegland: So one problem with the current one is that you see these lines going here into the papers. You can’t see what they point at. So what I’m going to try to do for my Mac version is let’s say we’ve selected Alan Kay. And the definition of Alan Kay is the word USA. So it highlights the USA. So we may or may not even need the lines. The lines have a secondary value, but they’re not the whole story. If something points to Alan Kay, it’s also highlighted, but not as much because this is kind of useless.

Speaker3: Well, it it.

Brandel Zachernuk: Gives you directional information. And, you know, that’s one of the things that I was talking about earlier: this idea that actionable, directional information allows you to make decisions that lead you on to a subsequent step. And to that end, it’s not useless, you know, that it’s in that approximate region. But I agree, bolding and other things like that could assist with being able to target the next action. But one thing that bolding can do, and it made me think when you were talking about Dene, how prolific Dene was with authorship last year, is have something that actually annotates the degree of connection once you have some body of things selected. So if 20 of the objects that you have selected have a connection to, say, narrative, then that’s reflected in some state on that node outside. But yeah.
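The degree-of-connection idea, counting how many of the currently selected items connect to each other node and reflecting that in the node’s rendering, could be sketched like this; the graph shape is an assumption:

```typescript
// Count, for every unselected node, how many of the selected nodes link to it, so the renderer
// can scale boldness or size by that count rather than drawing a line per connection.
type Edge = { from: string; to: string };

function connectionDegrees(selected: Set<string>, edges: Edge[]): Map<string, number> {
  const degrees = new Map<string, number>();
  for (const e of edges) {
    if (selected.has(e.from) && !selected.has(e.to)) {
      degrees.set(e.to, (degrees.get(e.to) ?? 0) + 1);
    }
  }
  return degrees;
}

// e.g. if 20 selected papers all connect to "narrative", degrees.get("narrative") === 20.
```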

Speaker3: Yeah, we need lots of good stuff.

Frode Hegland: We need to experiment with how to make that useful. You’re absolutely right. So one other thing: if I now, top right here, select spatial hypertext, there’s a strong line down to Mark Bernstein, because in the definition of spatial hypertext there’s a reference to Mark Bernstein. Right. But when Mark Bernstein is kind of among other things, it’s hard to see. So you’ve already seen this design. But what I also think we need to do is provide a nice mechanism whereby the user can say, select the children of

Speaker3: Or something like that.

Frode Hegland: Or copy everything that I have selected, plus what it is connected to.

Speaker3: So this this.

Frode Hegland: Is where it’s beginning to get kind of interesting, with some kind of submenus or whatever. And also the different categorizations, like the people here, Bush and Mark Bernstein: they haven’t authored any papers, but papers have referred to them, so they’re not the same kind of person.

Speaker3: Right.

Frode Hegland: That’s yet another thing. If you want to know about who’s written about history, chances are in this field they will have cited him. So seven hypertexts his geographies and here.

Speaker3: To to be able to.

Frode Hegland: So Apple has the term spatial. I like the term high resolution. Just different ways of looking at the same thing. But the point is: this can easily become a toy. It can easily become a demo. I really think it’s obviously our... sorry, Bruce is texting, I’ll read that after... our responsibility to make sure that it’s actually useful.

Speaker3: Yeah. Yeah. Well

Brandel Zachernuk: As you’re moving those things around, something else it made me think about is the way in which you could possibly augment or alter the state of the connected nodes based on proximity. So we can see that collective writing is connected to something down there, likely to be the co-constructed readings of the internet or something like that. And if you drag your mouse down toward it while you have collective writing selected, then you could have some activation level reflected on that node and connection where the two come together. So, like I was saying about juice, having a repertoire of ways that you can represent activation or connection or density or importance would be an interesting thing to play with. Because if all of these entities kind of live together, they have attributes. And normally things are spatially stable within the same connected graph. So if you look at a map of London, then, you know, Trafalgar Square is normally where it is; it doesn’t suddenly jump out to be by Wimbledon. But when we have something that we can move around arbitrarily, we lose that, and as a wayfinding device it sort of goes away. But if you have a set of attributes that you can rely on, such that the Trafalgar Square node always behaves the way it does, then that has the benefit of allowing you to see that it is still what it is.
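The proximity idea, raising a connected node’s activation as the pointer (or a dragged node) approaches it, is essentially an inverse-distance falloff; a small sketch, with the falloff radius picked arbitrarily:

```typescript
// Activation in [0, 1]: 1 when the pointer is on the connected node, fading to 0 at `radius`.
function activation(
  pointer: { x: number; y: number },
  node: { x: number; y: number },
  radius: number = 200   // arbitrary falloff distance in layout units
): number {
  const dx = pointer.x - node.x;
  const dy = pointer.y - node.y;
  const distance = Math.hypot(dx, dy);
  return Math.max(0, 1 - distance / radius);
}

// The renderer can then map activation onto weight, glow, or line opacity ("juice").
```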

Speaker3: Yes.

Frode Hegland: I think that is crucial. I think that navigation needs waypoints.

Speaker3: You need something to hold on to?

Frode Hegland: Absolutely. So one of the things Adam said earlier when I played around with this, because this is a map of the world made by me just for fun, just to Canada, USA here, I think. Yeah, Brazil should obviously be there. This is a kind of thing that should be able to be its own unit. So you could move this in and, you know, that kind of like a stamp, and then you take it out again. So you should be able to have specific things like let’s say you want authors on the left like we happen to have here. That would be one of your standard things. But I’m so excited to actually have this built now so we can start experimenting with what feels right.

Brandel Zachernuk: Yeah. Definitely. Definitely.

Frode Hegland: Love the responsiveness of this. It’s completely smooth.

Speaker3: It’s absolutely insane.

Brandel Zachernuk: We’re seeing it at sort of four frames per second just because it’s over zoom. But that’s that’s wonderful. It’s really important to be able to have that responsiveness. Yeah.

Speaker3: It is.

Frode Hegland: So yeah. Good chatting. I hope we can properly get on with building.

Speaker3: Yeah. Yeah, that’s what we’re good.

Frode Hegland: So I hope to see you guys on Wednesday.

Speaker3: What am I doing?

Brandel Zachernuk: I can probably make it. So far I do not have a lot going on, just going through the final throes of having everybody approve what I’m going to be saying for.

Speaker3: WWDC.

Frode Hegland: That’s going to be interesting.

Brandel Zachernuk: It’s just: what are websites on visionOS? You know, we have it every year, not even necessarily new features. Something that I want to talk about, and I’ve mentioned it in this group a bunch before, is the fact that we have speech synthesis. Like, that’s neat. It would be worthwhile for people to think about how they can leverage it, and speech recognition, stuff like that.

Speaker3: Yeah, I.

Frode Hegland: Mean, I would like to see websites as being nodes in a network and have the.

Speaker3: And to be allowed.

Frode Hegland: To view the connections and such. So it’ll be interesting. Fabian, are you here or are you there? Are you showing us something, or can we adjourn?

Fabien Benetou: I’d love to show you something, but it’s too late. So I’ll tease you and I’ll show you something next time.

Frode Hegland: If it’s a minute, I have a minute. If you want to show.

Fabien Benetou: It’s never a minute.

Speaker3: Okay? Okay.

Frode Hegland: Let’s make that an early item on on Wednesday. Then I’ll put it on the agenda.

Fabien Benetou: Actually, don’t don’t, don’t. Because maybe everything will be broken then, but

Speaker3: Put it in anyway.

Fabien Benetou: I think it’s something that both of you specifically would be curious about, but I want it to be more excellent.

Brandel Zachernuk: I look forward to it.

Speaker3: I’m putting in.

Frode Hegland: Fabien. Small demo. If it’s not ready, it’s not ready. But at least you know it’s there. Dene likes us to stick to the timeline. All right. See you Wednesday, then.

Fabien Benetou: Have a wonderful day. Bye bye. Bye.

Speaker3: See you.

Chat log:

16:14:17 From Adam’s iPhone : 4 papers even, Mark & I may do practitioner one

16:15:36 From Peter Wasilko : Of course!

16:16:27 From Peter Wasilko : I love paper writing so our work doesn’t get lost to future researchers.

16:16:58 From Leon van Kammen : stupid question: what is the difference between a journal and proceedings..the audience?

16:17:31 From Leon van Kammen : OK

16:19:14 From Dene Grigar : Proceedings are papers that are presented at an academic conference. The ACM Proceedings are peer-reviewed and considered top-tier

16:20:11 From Dene Grigar : A journal article is a research paper published

16:21:15 From Leon van Kammen : TBH it sounds a bit frightening to write for..a top-tier peer-reviewers 😅

16:21:54 From Leon van Kammen : Im good with it

16:21:56 From Leon van Kammen : 🙂

16:22:40 From Frode Hegland : https://x.com/liquidizer/status/1787443692703273440

16:23:19 From Dene Grigar : Generally with scientific papers, there is a team of authors. This is what we are suggesting with ours.

16:27:23 From Frode Hegland : Entity extraction, not analysis from AI 🙂

16:28:21 From Dene Grigar : So, we should list the papers we are suggesting that we are submitting to ACM

16:29:05 From Dene Grigar : There are two opportunities left: 1) Blue Skies, provocations; 2) Workshops (i.e. HUMAN)

16:30:25 From Fabien Benetou : I’d be up for writing on manipulating grammars in XR, either blue sky (if appropriate) or workshop (assuming participants would actually try, hands-on)

16:31:03 From Peter Wasilko : Is the bibliography counted toward the page count?

16:31:17 From Peter Wasilko : Phew!

16:31:47 From Fabien Benetou : wherever you say is more appropriate, trusting your experience

16:31:58 From Rob Swigart : I’m interested in blue skies….

16:33:27 From Fabien Benetou : I was suppose to write sth for you Frode… forgot what :/

16:33:44 From Frode Hegland : Reacted to “I was suppose to wri…” with 😂

16:35:22 From Peter Wasilko : I think we should revisit Benediktine Cyberspace.

16:35:24 From Fabien Benetou : Horizon?

16:35:45 From Fabien Benetou : yeay, Hubs… except it’s kinda dead :/

16:36:57 From Peter Wasilko : Your avi could be a Probe showing everyone what slice of the higher dimension space you are viewing.

16:38:51 From Peter Wasilko : We need to cite Chat Circles then

16:39:02 From Dene Grigar : yes

16:50:09 From Dene Grigar : brb

16:51:42 From Fabien Benetou : can do a (short, hopefully) demo in terms of what could be specific to hypertext, helping show it’s not “just” XR

16:52:25 From Frode Hegland : https://youtu.be/xM4CXK-O1kQ?si=4crzn6LmtbpZrDGs

16:52:44 From Frode Hegland : https://youtu.be/3NYtvzE20OQ?si=GR4ehVBYyv4LcuBR

16:53:04 From Adam’s iPhone : Have to run to my scouts now – will rewatch recording

16:53:14 From Peter Wasilko : Unfortunately, I am going to have to bow out at the top of the hour, I need to be on the road shortly.

16:53:32 From Leon van Kammen : I’ll have a date with a saladbar in 7 mins

16:54:36 From Peter Wasilko : I will see everyone on Wednesday. Enjoy the rest of the session.

16:54:39 From Dene Grigar : I need to get to my 9 am.

16:55:05 From Leon van Kammen : I have to go now, thanks for the news & updates! cheers!

17:54:29 From Andrew Thompson : I need to head out for my next meeting, take care everyone.
