15 May 2024

Frode Hegland: Hello, Adam. Oh, Fabien and Adam first. That’s good. Fabien. Okay. Just briefly, even though this is on the record, it’s not a secret thing, but Dene and I just had another meeting. And the effort that you guys are putting in, we’ve just basically split: first six months of the year, Adam; second six months of the year, Fabien. And we’ll figure out what you feel that will mean. Simple, right? Because you’re kind of amazing. Are you at your dorm?

Samya Brata Roy: No. I’m inside the fellow’s office here at the School of Advanced Studies.

Frode Hegland: Oh. Very good. Once Dene is back, she had to run away, I’ll restart exactly on the dot. It’ll be good if you could introduce yourself again. I think most people have met you at The Future of Text two years ago. So look what I got today. I got paper like that. Because Apple is all about: use a pen, use a pen, here’s a more expensive pen, buy the new pen. And yet it feels like you’re writing plastic on glass, which is absolutely crazy. Anthony is here, so we’re back, and Peter is almost here. I’m guessing he’s having brunch. Let’s see.

Peter Wasilko: Good morning from an early brunch. It should have been an earlier brunch for me to have been on cam, but I have a ton of food in front of me and you don’t want to watch me chew.

Frode Hegland: No. Absolutely not. Morning, Peter.

Peter Wasilko: But I was working on an early draft for a section to go into the high resolution thinking paper, and I’ll email that draft out to you after the meeting.

Frode Hegland: Right? Good. Cool. I’m not sure if Leon is joining us today. I think we are probably full crew. And we have someone new today, so why don’t we all do introductions? Can we do a 30-second introduction starting with Adam’s iPhone? Because, okay, you have things going on in the house. Okay. Now we can start with Fabien, going by random screen here.

Fabien Benetou: All right. Hello. So my name is Fabien. And as you might infer from the mess around me

Speaker5: Originally Italian, but based in the Netherlands. So And actually.

Frode Hegland: What’s that? You.

Dene Grigar: Abn.

Frode Hegland: I think he must have been watching 3 Body Problem and something just took over.

Fabien Benetou: Probably not me. I’m on mute.

Frode Hegland: Oh, really? Okay. I think he’s. He’s listening. We’re back. I mean, we’re back to quiet.

Dene Grigar: It’s more coming this morning.

Frode Hegland: Hope so. I shall bug him.

Dene Grigar: But he was coming, I thought.

Frode Hegland: I’ll just scream at him. That always works.

Dene Grigar: Sam? Yeah? How are you doing?

Samya Brata Roy: I’m doing good. In the process of writing my thesis. So. Good for you.

Dene Grigar: Yeah. Are you? And you’re in London right now, right?

Samya Brata Roy: I am in London right now, at the University of London, as a visiting fellow. Yeah.

Dene Grigar: Now, I can’t wait to see who wins the New Media Writing Prize.

Samya Brata Roy: Oh, yes. I’m very excited for that. I mean, some fantastic work going on. In fact, I have a.

Dene Grigar: I have a student in our program, and the student’s work is shortlisted. Oh, nice.

Samya Brata Roy: And also, I was telling Frode this morning about the New Media Writing Prize and the brilliant work that they have been doing.

Dene Grigar: Okay, Fabian. Voila!

Fabien Benetou: Sorry, I was very confused, because the audio was about introductions too. It’s like, oh, somebody else is starting before me, so no problem. And it didn’t make any sense. Okay. Well, so I’m a prototypist. I focus on WebXR, so VR and AR on the web, mostly for the European Parliament, but with others before, like UNICEF or Mozilla. And I mostly do it because I need to organize my mind. And for this, I find that notes and wikis alone are not enough. And I hope, maybe naively, but I’m rather convinced, that using the space around me to organize knowledge is not going to solve everything, but at least help a little bit. Well.

Frode Hegland: That was short and brief. Perfect. Andrew, you are up.

Andrew Thompson: Alrighty. I’m Andrew. I’m the programmer behind one of the prototypes. But I’m not the mind behind it; I’m just kind of the hands that implement. But it’s a really interesting job. I get to be between all of the group, get to hear all the discourse, and then do my best to implement whatever suggestion was gathered that week. I’m usually several weeks behind the most recent suggestion, because it takes longer for me to implement than it does for them to come up with new ideas. But it’s been quite interesting.

Frode Hegland: I’m not going to add to everyone’s introduction, but I will add to Andrew’s, because Andrew is that rare combination of highly skilled coder and humble. You don’t see that very often. And when you say you’re not the mind or the brain behind it, that is, of course, rubbish. You are really conceptually contributing to it. However, you do the thing that is also very rare: you know how to listen. And for that I will, in public, again say thank you, because it’s brilliant.

Dene Grigar: I’m glad that he’s in my lab. He was a former student of mine. He’s also going with me to Victoria. He was with me at the hypertext conference in Rome. And he’s going with me to Poland.

Frode Hegland: Yeah, and I met him first in Rome, but I had no idea about any of this stuff. So it was before, before Sloan. Rob? You’re next.

Rob Swigart: I’m Rob. I’m a writer. I am a fly on the wall in this group. So basically, I’m here to listen. I did promise I would write something squishy and fantastical about the future of VR headsets. And since I just finished a novel in which I have a character who has one. I’ve done most of the work already. So I’ll have that for you by Monday.

Frode Hegland: Fantastic. Thank you Rob.

Dene Grigar: I want to add something about Rob for Samya. Rob is a very famous pioneer of electronic literature. He was on the advisory board of directors for the ELO in its earliest iteration. I have documentation of his emails back and forth to Margie Luesebrink. And he’s also been hanging out with us for the past year and a half in my lab, and he’s a good friend.

Samya Brata Roy: Yes, I know about that, that he was one of the founding members. I’m aware of the scholarship, but the other finer details I did not know.

Frode Hegland: Since you are already talking, Samya, please introduce yourself, and please pronounce your name again, because it’s so different from how it’s written and I’m still learning it. And we’ll see.

Dene Grigar: I think he’s frozen.

Frode Hegland: Oh, no. You’re back. Yes. I’m guessing you didn’t hear any of that. You were frozen for a minute. Please introduce yourself. You are next on my screen.

Samya Brata Roy: Me, right?

Leon van Kammen: Yes, yes.

Samya Brata Roy: Oh, sorry. The network is slightly wonky here. Apologies for that. Hi, everybody. My name is Samya Brata Roy. I have met some of you at the 2022 Future of Text, which happened inside the Linnean Society, and I have very fond memories of that. I am a PhD scholar at IIT Jodhpur and currently serving as a visiting fellow at the University of London. I am also involved in digital humanities and literary studies. I’m a literature student by training, and I’m supposed to submit my PhD thesis sometime this time next year, so fingers crossed. Now I’m in the writing phase of my thesis, where I am realizing what it means to have to write the thesis, which is sad. So yeah, I hope to learn from the discussions and everything. I’m curious. Thank you.

Frode Hegland: Having gotten my PhD on Monday, the only advice I can give you when it comes to writing a PhD is: don’t listen to a word I say. Not a single word. Can you please say your first name one more time?

Samya Brata Roy: Yeah. My first name is Shamu. My middle name is Brato. You can say Shamu and Roy. So my first name is Shamu.

Leon van Kammen: So Shamu.

Frode Hegland: Okay, because in writing, to me, it looks like Samya, which is how I introduced you to my wife today.

Leon van Kammen: So. Yeah.

Samya Brata Roy: Yeah, yeah, yeah, that is the anglicized way of saying it. And this is the Bengali way of saying it.

Frode Hegland: Okay, well, we do want it to be the Bengali way, that’s for damn sure, but thank you. And then we have Well, then you need no introduction. But I know him.

Dene Grigar: He knows me.

Frode Hegland: Yeah, we all know you. And Peter, get that sandwich out of your face.

Peter Wasilko: Okay. I’m an independent scholar and programmer, formally trained in law. My interests go very much to the technical. I love building complicated systems that have a lot of leverage points and a lot of articulation points in them. So I tend to come up with systems that are grounded in parsing expression grammars, parsers, programming-language-design kinds of concepts. I want all of our tools to be as open-ended and flexible as possible, and I absolutely despise the “there’s an app for that”, one-single-tool-operating-in-isolation-for-everything mentality that the App Store brought us. I think that was a huge step backwards in the kind of systems that we had available. I’m a big Tinderbox user and part of that community too. And my outside research interests are straying heavily to international expositions and world’s fairs of late, which is a really interesting domain because it’s highly cross-disciplinary, and it’s almost like the exact mirror image of what we have working with the ACM Hypertext corpora, where we have total control as publishers over the visual media and can have the tooling of the system generated for us. Whereas in a multi-discipline area, you don’t have control over that; you’re working with documents that you don’t own. And that’s why we need to have detached visual meta, so that we can go augmenting documents that we don’t have direct control over.

Speaker10: Automatic voice message system.

Dene Grigar: Who’s that? We’re having a bad day with tech.

Frode Hegland: Thanks. Well, thank you, Peter. That was a refreshing and honest way of saying you do complicated stuff, which is good and true and worthwhile.

Peter Wasilko: Okay, back to my food.

Frode Hegland: And also I have to, as I do on basically a weekly basis, offer credit here with the notion, sorry.

Speaker10: The mailbox is full.

Frode Hegland: That’s so weird. There shouldn’t be anybody with that hair. But anyway, I just want to highlight to everyone that we talk a lot about metadata in this community. And because of Peter’s contribution, we really separate what we think should be core, native, what the thing was born with, versus the metadata imposed later, which must be easy to delete. So, Peter, that’s just such an important thing. Mark and then Leon, introductions please. Mark? Oh, his broadband is messing up.

Leon van Kammen: Are you here, Mark?

Frode Hegland: Okay. Leon, can you introduce yourself? We have a shop. Shop? Are you still there? Xiaomu?

Leon van Kammen: That is her audio. Yeah. Cool.

Frode Hegland: Just a brief intro, please.

Leon van Kammen: Yeah. Hi there. I’m Leon van Kammen. I am a WebXR freelancer and researcher. And I focus mostly on a spec called XR Fragments, which allows us to reuse 3D models and serve them, link them and navigate them, basically. And I also enjoy, on a weekly basis, talking with my Future of Text friends about new ways of dealing with basically all things.

Frode Hegland: All things considered. That’s an American reference. Well, okay, this is all very good. So we have an agenda today. Can you believe it? There just happens to be an agenda, like we have every single Wednesday. I can share a screen here as well. Yeah. Mark just texted; his broadband has gone. Right. If you can’t see this, wave about. So this is particularly an issue for Adam, the first one: we may need to think about a potential design day at a more European-friendly time, because with three kids and lots of things going on, it’s very difficult for Adam to reliably be here at basically dinnertime, his time, at 6 to 7. So for the Europeans there, if there’s any other time that suits you as well, we can look at that. Now, this bumps up against a big thing that we haven’t talked about, and that is, I believe, Fabien coming on board for more direct work over the next few months. I’m hoping Fabien will not mind this little video. Yeah, yeah. Good. So Adam will be building a parallel thing to Andrew’s that will work on the same data, including the same layouts, and we haven’t really talked. Fabien, sorry to kind of put you on the spot, but we need to talk about how you feel your work should fit in, in that sense. So that’s why I’m saying, if we have a time for that, we really want to make it possible for the American half of this to be there. But we’ll just have to look at realities. And we’re still recording everything. I don’t know what else to say. Any other comments on that? The potential design day.

Dene Grigar: Now, right now it’s 8:00 my time, and it’s dinner time in England. Is that correct?

Frode Hegland: Here it is 4:15. In Sweden and Europe it is 5:15.

Dene Grigar: Yeah. So if you want to, is it better for you to work later in the evening, when the kids are in bed, Adam? I mean, I don’t have a problem meeting at eight in the morning. I promise, I’m an eight-in-the-morning kind of girl.

Adam Wern: Yeah, sometimes it is, but it’s also kind of hard to sneak out while putting them to bed. It is doable with one kid, perhaps, but with three kids, the probability that all of them go to bed properly is low. So regular daytime is best for me. Perhaps even early morning for me could work, though it’s late for you. But yeah.

Dene Grigar: You guys work it out. Let’s not do this now, because we don’t have time to haggle over time. Just give some times that work for all of you and pass it on to Frode and me.

Leon van Kammen: Yeah.

Frode Hegland: The poll that you had, Adam, for our meeting here in London, maybe we can add some things on there. Right. So that’s actually the next thing. Just a quick reminder: May 30th, 31st, to the 3rd of June, flexibly. I’m so excited that people are coming to that. Update on the invitations: I have sent a few. I’m taking my time, because this is the sort of thing I screw up with great aplomb. Thankfully, I copy Dene on them, so that’s with two people. So that’s happening. Are there any other announcements?

Dene Grigar: Can I talk about the invitations I’ve sent out?

Frode Hegland: Yeah, absolutely.

Dene Grigar: Okay. So, you know, I’m inviting local people. These are folks that will not be participating in the book, but they are movers and shakers in the Portland, Vancouver and Seattle areas. So Ben Camerado, who’s head of franchise development at Wizards of the Coast. Wizards makes a lot of games. He’s very involved in VR. He was a senior designer on the Halo project for many, many years, and then he took this job with Wizards before the pandemic, because Wizards wanted to get into VR. And so he took the position. He’s coming. I also have Max Alt and somebody from his crew coming from LRS Architects, which is into using VR for architectural structures and design, which I think is really useful. People from the Murdock Charitable Trust, who are giving us the space for free; we have one person, the CEO from that organization, coming. She’s in control of $2 billion worth of assets, and so we really want her to come. Ron Arp, who is in charge of Identity Clark County, which is our big policy organization here in the Vancouver area, handling everything from, you know, bridge construction to traffic patterns and that kind of stuff. They’re interested in VR from the standpoint of using it for visualization, so he’s very interested. Skip Newberry is the CEO and president of the Technology Association of Oregon, a huge organization that handles anything tech-oriented, and they’re trying very hard to revamp the creative technologies in Oregon right now, so he’s very interested. And Toby Roberts, president and CEO of Happy Finish, a VR game and interactive media company. He’s from England; he has an office in London and one here in the Vancouver area. But he’s coming. I’m still waiting to hear from a few other folks, but it looks like right now we’ve got a nice turnout of people coming from the local community, which is great.

Frode Hegland: Yeah. That’s phenomenal. It’s just really wonderful. You know, the only criterion for someone to be here is that they passionately care, or that we can help bridge the gap for them to get into this. Yes. That’s perfect.

Dene Grigar: Well, can I mention one more thing? The thing about the Sloan Foundation grant is that we have to expand our, you know, our body, our people, and this expands it exponentially. And they also wanted the symposium in the United States, because they want to start to bring in people from the US and kind of get them interested in this project. And we also have a student competition, and a lot of the people coming from this group could very easily be people that could help fund that piece of the project. So.

Frode Hegland: Yeah. Thanks for stressing that. We do need to focus more on that now. Yeah, you’re absolutely right, Dene. Right. So, papers. I’ll just read them to see if there’s agreement or not. There is a demo paper; I’m sure we’ll think of something better as a title, but that’s basically explaining what this is. Dene has essentially written it; I’ll work with her, Andrew and Mark also. Then there is the thing that I’m headlining, let’s call it that: the high-resolution thinking and journals, which will have more contributions. I know Peter has already written for it, which is great. I have tried to send it to you a few times, but it gets a bit crazy, so it’s taken a while, but please, everyone, feel extremely welcome to add to that. Then there is citation views, also not a final title, by Mark and Adam, and the inner hypertext of digitally native documents by Mark, which probably does have the final title. And so for the rest of the day today

Dene Grigar: Can I ask you a question? This is something that we talked about, you and I, earlier today, about the papers, the high-resolution paper. We’re recommending the extended abstract, which is two pages. That way there’s not any stress, and we have a better chance next year of writing a long paper. I would love to see a long paper come out of this team that we can maybe win a competition with, but the two-pager would be so nice to do. It’s chewable, we can do it, and we don’t have to over-promise anything. And the ACM people are a tough crowd. I’ve reviewed papers for them; I’m reviewing right now. They’re not kind, and they’re not flexible about what gets in and what doesn’t. So I don’t want to do anything that’s going to get us not accepted, and over-promising would do that. Does that make sense to everybody? We’re at the beginning of our project, so we don’t have to promise them a whole hell of a lot.

Mark Anderson: Yeah. As a peer reviewer for Hypertext, I can concur that, yeah, the judging is fairly barbed.

Frode Hegland: So I worked.

Dene Grigar: With Mark Bernstein. I have to say this: Mark and I were on the same paper, and he’s fussy even to other reviewers. So reviewers get into tussles over papers, like: “That’s not acceptable.” “Why did you say this?” “Why don’t you know this?” “Did you even read that paper?” So we don’t want that kind of response to our paper.

Frode Hegland: Dene, have you seen the movie American Fiction?

Dene Grigar: Oh, I love that. Oh, God, I read the book. The book is even better.

Samya Brata Roy: I watched it on the flight while coming here. I loved it, I loved it.

Dene Grigar: The book is so good.

Frode Hegland: So on that, Dene, this is what I’ve written so far, and you’re the boss of the academic side because you’re the one with the knowledge. So I’m perfectly happy to shrink this down and make it a two-page thing, as you say. And then some of this obviously is going to go in our own book, The Future of Text. So that’s another issue. And then we’ll look at next year. So yeah, we’ll do that. Right? Yeah. So after I’ve done blabbering on here: there’s no programming by Andrew today, which is absolutely fine. So Fabien is going to do a demo. Right, Fabien?

Fabien Benetou: Yeah, sure.

Frode Hegland: And then after that, we’re going to talk about what we’re actually building now in a greater level of detail. And then we’ll just finish off with the next steps. So over to the Belgian.

Fabien Benetou: So I’ll try to share my screen again.

Frode Hegland: Fabien is trying to share the screen, but I just want to remind you all, every once in a while, not necessarily after every meeting, to look at the PDF and Slack record of our meetings to see if something offends you because it was wrong. Generally, I think it’s actually quite good, but it most certainly is not perfect. I’ve had to edit things myself.

Fabien Benetou: So let me know if you can see my screen, please. Yes. Thank you. So I don’t really remember why I started this, but I remember wanting to have basically a menu, because inside that environment I can teleport around; if you can see the top right corner, I can go there. And I need to be able to have some of the code there, or the buttons, the interactions. So one of them is always on my wrist, like a shortcut, basically, that I can drag there. But it’s not enough. And some of the code is floating there that I can use, but sometimes I want a more traditional menu. So I started to reproduce one that I’ve seen; I’ll put the link on GitHub. I think Igor made this. And basically here, when I pinch in space and there is nothing around, like no code, nothing to grab and move around, it makes a small cube appear, and the opacity changes with how long it’s been held. So basically if I pinch and let go, it’s going to appear and disappear right away. But if I hold it for about half a second, then it’s going to make it appear. At that point it was not usable yet. This is the same, but with proximity: if I get close to that cube, you can see the wireframe, and away from it, no wireframe. And then if I’m close enough, as you saw with the green part, then I do something; it’s like the button has been pressed, if you want. Nothing.

Fabien Benetou: Nothing. And if I’m close enough? No. Still nothing. Well, if I’m close enough, it changes the color of the button. And now, finally, I do it again. But then instead of having one button, I can have another button: a submenu. And then you can go recursively and have a submenu of a submenu. And it’s relative to me when I do the first pinch, but it’s also spatial, like I can still click on it. Then I extended it to a lot more buttons with a couple of colors. So when I go on a button, it’s going to show that it’s selected, like it becomes full, not wireframe. And when I release, if I release on a button, it’s going to apply that action, which here, just as an example, is changing the color. And lastly, just before this meeting, I wanted to see how many of those could realistically fit. So I took the 140 CSS colors and I just stacked them up. It’s too close. So it works, kind of, but not properly. You see, I press blue here and I get the wrong shade. I think when you release, basically the finger moves ever so slightly, and here, at one cubic centimeter precision, it doesn’t look like it’s good enough. And in terms of performance, having 140 buttons that change color and opacity again and again, plus tracking, it’s a little bit borderline. It’s usable, but you can feel the lag, basically. But with about 50 buttons it felt pretty good, like pretty responsive. But
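The hold-to-open behaviour described above, where a quick pinch-and-release makes the cube appear and vanish while a roughly half-second hold opens the menu, with opacity ramping up over the hold, can be sketched in a few lines. This is an illustrative sketch, not Fabien's actual code; the function names and the 500 ms threshold are assumptions.

```javascript
// Illustrative hold-to-open menu logic (assumed names and threshold,
// not the demo's actual code).
const HOLD_TO_OPEN_MS = 500; // ~half a second, as described in the demo

// Opacity ramps linearly with how long the pinch has been held, capped at 1.
function menuOpacity(heldMs) {
  return Math.min(heldMs / HOLD_TO_OPEN_MS, 1);
}

// A quick pinch-and-release makes the cube appear and vanish right away;
// only a hold past the threshold keeps the menu open.
function shouldStayOpen(heldMs) {
  return heldMs >= HOLD_TO_OPEN_MS;
}
```

Called every frame with the elapsed hold time, this gives the fade-in Fabien describes, and the release handler simply discards the menu when `shouldStayOpen` is still false.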

Dene Grigar: So I have a question, Fabien. That’s interesting. I’m interested in how much space is required for interactivity. You’ve got 50 buttons that work, and you’re saying the boxes are how large? Is each box a centimeter square? Yeah. Okay. So with the proximity, you can put two of those side by side and touch them, as long as each is a one-centimeter-square shape?

Fabien Benetou: Yes. Actually, touching them works even at one centimeter, and they’re literally next to each other. So that works. The problem I have right now is when I let go: the position of the pinch when it’s released, I think it’s one of the two fingers, and basically if it’s a centimeter, then I pinch outside of the box or away from it. But if I just hold my finger like this, I can go... I mean, tracking for now works sub-centimeter, I would say, and it’s really reliable. If you see the highlight for the color, the right cube is being selected, right in the middle of the pinch. So, sorry if it’s not clear: going through, pinching like this, a centimeter is good; letting go, for now, I don’t know how to do. So if the action is done when I release the pinch, then I space them out by about three centimeters, and then if I do this, it’s going to always, or I guess 99% of the time, select the right one.
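The selection logic under discussion can be sketched as a nearest-cube pick that rejects the result when the release point has drifted too far, which is why 3 cm spacing survives the jitter that 1 cm spacing does not. Names, field shapes, and the distance threshold are assumptions for illustration, not project code; positions are in metres.

```javascript
// Hypothetical nearest-cube picking for a grid menu (not project code).
// `cubes` is an array of { id, center: [x, y, z] }, positions in metres.
function selectCube(pinchPoint, cubes, maxDist) {
  let best = null;
  let bestDist = Infinity;
  for (const cube of cubes) {
    // Euclidean distance from the pinch-release point to the cube's centre.
    const d = Math.hypot(
      pinchPoint[0] - cube.center[0],
      pinchPoint[1] - cube.center[1],
      pinchPoint[2] - cube.center[2]
    );
    if (d < bestDist) {
      bestDist = d;
      best = cube;
    }
  }
  // Reject the pick if the release drifted further than maxDist,
  // e.g. half the spacing between cubes.
  return bestDist <= maxDist ? best : null;
}
```

With cubes spaced 3 cm apart, a release that drifts by the roughly 1 cm Fabien observes still resolves to the intended cube; at 1 cm spacing the same drift lands nearest a neighbour, matching the wrong-shade misfires seen in the demo.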

Dene Grigar: I’m asking because the Apple Watch is an interesting model, right? When you’re trying to log in, the little squares where you’re touching your fingers to log in are tiny, right? My watch has a crack in it now, so I’m finding that I have to be very, very careful. In the old days, I could just very quickly hit those squares and turn on my watch. Now I have to be very particular about where my finger is, so there seems to be something going on with the crack. But this is a very small size, right? A tiny, tiny size. So it looks like we’re able to get pretty close to what the Apple Watch is allowing for.

Fabien Benetou: So I did not go lower, and, I mean, it’s definitely possible. One thing that would probably be interesting in terms of accessibility is to make this a parameter, namely the size of the cubes and the spacing of the cubes. I just use cubes because I like cubes, they’re easier for me, but that could be a parameter. So, for example, somebody like young kids, who don’t have the dexterity yet: you don’t want fine, precise control as a requirement. Or maybe, I don’t know, you’re in a plane and it’s shaking. So I think having this as something flexible... but overall, it’s a bet, of course, but I’m rather confident about the curve of progression: as long as, again, we’re relying on the quality of cameras, as cameras improve, hand tracking and any kind of tracking will improve. And I guess we’ll go, I don’t know, maybe sub-millimeter. I have no idea.

Dene Grigar: Well, one more thing that might be worth thinking about: maybe it’s not squares, because squares fit so close together. Maybe it’s triangles or circles, because your finger would have more space. It would not be butted up against the next one as tightly, right? If you put circles side by side, you have more space for your fingers to be a little sloppier.

Fabien Benetou: Yes, yes, indeed. I’m not sure what the ideal shape is; I guess it depends, really. Here the goal was having interactions that are always relative to you. Again, you can move around the space freely, either teleporting or moving around, and then the menu is always nearby: wherever you pinch, as long as the camera tracks it, the menu is going to open, and then a submenu, and whatever. But yeah, as for what the shape is going to be... I think, Frode, you shared earlier today an example of a menu that looks like the things you need in a game; I think it was a backpack with items in it. That would still be the same kind of interaction: you pinch, or you activate it somehow, and then you pick one of the things, and you can reorganize. Because here, the, how do you say, the layout of the buttons of such a menu is determined pretty much just to test it, to make it work the simplest way. But you could reorganize the buttons yourself. So, for example, to go back to your question about the balance of how close to each other they might be, you could say: okay, actually the color arrangement doesn’t make sense, I want to rearrange those buttons myself. And then you store the layout that the user has designed for the buttons.

Frode Hegland: Trying to find that tweet. Yeah. But yeah, this is very impressive, obviously. I have questions, but they’re more related to the next section. So does anyone else have questions or comments before that?

Adam Wern: Yep, I have. So, Fabien, I’m struggling with the same problem, as I’m displaying Mark’s full data set with 1,100 papers as small thumbnail images. And I’ve been trying to have it as a kind of, not a dashboard, but some sort of control panel where you have all the papers at reach as small thumbnails; you press one, as an alternative to the laser pointer up on the wall far away. And I think the pinching problem is something we have to solve together, for the laser pointers too. There is this idea that whenever you aim at something for, let’s say, a third of a second, that item is selected in the background, invisibly. And whenever you pinch, wherever you are, you trigger the item you had in mind a third of a second ago. So kind of a delay, and I think that would work quite well. It would work against the fact that pinching is not a well-defined thing: when you pinch, you always move your fingers a bit. So I think we could work around that by having that kind of pre-selection, something that always knows what you were pointing at a third of a second ago.
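The pre-selection idea above can be sketched as a small history buffer: record what the hand is aiming at every frame, and when the pinch fires, act on the target from roughly a third of a second earlier, so the finger movement of the pinch itself cannot change the selection. A minimal sketch under assumed names and a 300 ms look-back; not project code.

```javascript
// Illustrative dwell pre-selection buffer (assumed names, not project code).
const LOOKBACK_MS = 300; // "a third of a second", as suggested

function makePreSelector(lookbackMs = LOOKBACK_MS) {
  const history = []; // [{ t, target }] in chronological order
  return {
    // Call every frame with the current timestamp and the aimed-at target.
    record(t, target) {
      history.push({ t, target });
      // Drop entries older than we will ever look back to.
      while (history.length && history[0].t < t - 2 * lookbackMs) {
        history.shift();
      }
    },
    // On pinch at time t, return what was aimed at lookbackMs earlier,
    // ignoring any aim jitter caused by the pinch gesture itself.
    resolve(t) {
      let chosen = null;
      for (const e of history) {
        if (e.t <= t - lookbackMs) chosen = e.target;
        else break;
      }
      return chosen;
    },
  };
}
```

The same buffer would serve both the near-field cube menu and the far-field laser pointer, since both only need "what was targeted just before the gesture started".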

Fabien Benetou: Yeah, yeah. It’s something to experiment with. It’s kind of frustrating. Maybe also studying a bit how it’s being implemented in whatever device we choose to use. But yes, some kind of either delay or threshold, because, again, the selection before doing the action is close to perfect, it’s exactly what is expected, but the release of the pinch, maybe... I’m not sure.

Adam Wern: And I think it has gotten worse since the Quest 2. It felt a bit better then, because it was kind of calculated, interpolated. Now that they have higher resolution and actually track fingers better, these problems pop up, because it tracks too well, so it doesn’t understand what you intend anymore, just what your fingers are doing.

Fabien Benetou: Indeed. And when I did the very first step, the cubes were ten centimeters per side, and of course there was no problem; they were super far away. But then I thought, okay, that works, so that was a good step. But, except if you want to do exercise, it’s probably not the kind of interface you want. So I started to go lower and lower. Again, it’s kind of a guess, a bet: do we want to invest time in having the smallest, most precise menu, and thus find these kinds of, maybe, hardware or OS limitations? Or do we say, a bit like I said earlier: we know, or we’re very confident, it’s going to improve; it’s not our problem to solve; the hardware and the OS are going to solve it for us. So we just say, okay, three centimeters sphere radius for now, or even less, actually, and good enough for our use case. Or, if we really need it, because we have 1,100 papers to show, we dig into it. It’s kind of a strategic question: is it a worthwhile problem to solve, not just for the beauty of the technical challenge, but because we can uncover new usages? And I’m not sure.

Dene Grigar: I think so. I mean, I think we don’t want to tie ourselves down to one way to interact with the environment. And the way we’re going to write up the contract for you is that you’re not necessarily creating a parallel environment, but you’re expanding on what we’re doing and taking it in different directions to kind of prod what’s possible. I mean, that’s what we’re supposed to be doing, right? Looking at what’s possible in an open source environment using XR. So that sounds perfect. The more of us doing this, the better, right? It doesn’t matter if there are ten of us going in different directions, as long as we’re using the same kind of source files and an open source mentality, I think.

Fabien Benetou: So on that point specifically, as I tend to do, I posted a little video, and there is a link to the source on how I’ve done it, which I think is relatively straightforward. So it’s all open source already, basically, for this kind of implementation.

Frode Hegland: So, a couple of things. First of all, hearing this level of discussion is ridiculously wonderful. You know, I wish Randall was here, because for so many years we talked about things in XR, and this is such a bizarre reality. Who would have thought that that interaction issue would happen, that things literally would kind of stick to your fingers, so to speak? It makes sense. It’s an interesting issue. It’s just so absolutely wonderful to be tackling these issues. I want to say thank you for that. And also, no, I’ll hold my fire until the next bit and bring that up then. Fabian, please go ahead.

Fabien Benetou: So please let me know if you can see my screen.

Dene Grigar: Not yet. Not yet.

Frode Hegland: It says you’ve started sharing, but it isn’t actually showing.

Fabien Benetou: I think it’s literally once per session for me, I can’t share twice, so I’ll put the link in the chat. But then please do open it; I was going to force you to by sharing my screen. So it’s Oleg Frolov, I had the name wrong before when I said Igor, and the circular menu is kind of what I was going for, so I’m not there yet. It’s literally yesterday evening that I started to tinker with this, but I think he has a lot of really good ideas. And initially I thought some of those ideas are, let’s say, tricky to reproduce in XR without a UI/UX framework. And now I’m thinking, with some of the architecture or helpers I have, it became maybe a couple of dozen lines of code, like 50 or so, so relatively compact and, I want to say, easy. So, yeah, I think he has a bunch of ideas that are not just concepts, in the sense that he did implement them. And I want to dig a little bit deeper. And now the basis is there to work a bit on how it should tilt relative to the gesture; I think that should be a little bit easier now. But I recommend you check out his work if you haven’t yet. I think they’re original ideas.

Frode Hegland: I think we need to invite him to the symposium and the book. This page looks really good. Yes, so thank you. Anything else? We’re not dropping your demo, but I think it would be useful to move on to the next stage.

Fabien Benetou: One last thing: I did share the source code, and I did share Oleg’s URL directly in the source code, because I don’t want people to think that some of the ideas implemented are mine when they’re not. And I think overall the provenance aspect is fundamental when doing open source, and otherwise. So I wanted to make this very clear. That being said, I’m rather happy with the result, but again, it’s a stepping stone for more exploration. It’s like going on a hike: you get on top of one hill and then you see a dozen hills in front of you, and you’re excited because you know that behind those there might be even more hills. So I think that’s the kind of excitement of open source and provenance and being able to explore.

Frode Hegland: Do you know this guy, Oleg?

Fabien Benetou: No, but I had a chat with him once or twice. I don’t know him very well, but I am happy to propose to him to come do a demo or an event or whatever you think is best.

Frode Hegland: Please do introduce him to us. Absolutely.

Adam Wern: He’s actually in London, Frode. He’s in your neighborhood.

Frode Hegland: That’s what I was wondering. So, thank you. Maybe we should ask to see him for lunch on the Friday when you guys are here, or something.

Fabien Benetou: That would be perfect. And I think what I will do is send him this video as soon as it’s published, because then he can see directly how much we appreciate his work and how hopefully the discussion together would be fruitful.

Frode Hegland: So you’re saying we have to be coherent for the rest of the recording? Okay.

Fabien Benetou: I mean, he’s probably going to give up on us a couple of minutes after watching this very part. So no, just for now.

Frode Hegland: Yes. Right. Okay. Let’s move on while keeping this firmly in mind. And the first bit should be obvious; it’s for clarification. And that is how I see our ultimate demo. Again, I think we’ve already agreed on this, and this is probably a bit longer than we want each person at ACM Hypertext to do, but for the sake of clarity, I’ll do the whole thing. Someone sits down at a computer, a traditional computer, and does some work. In the beginning, it will probably be our Reader, because we can control it. We own it.

Dene Grigar: Can I ask a favor? Can you record this piece of our meeting today so I can have it for the demo paper?

Frode Hegland: Well, I’m recording everything, and I can try to extract it, but actually editing the Zoom files is a real pain, because they balloon if we do any editing. It’s so odd.

Dene Grigar: Okay, okay. Never mind.

Frode Hegland: No, but hang on, Dene, here’s what we will do. We’re now 45 minutes in, so let me do this: Claude, this point of the recording is really important. Can you please highlight this when you do the transcript analysis for us? Let’s see if that works.

Dene Grigar: Okay, good. Thank you.

Frode Hegland: Either way, it’s 45 minutes in. Right? This is how I see it, for clarification. Someone sits down with any third party software, starting with our Reader. They do things, it’s all very fun. And then they decide that in the library view they have a map, so they go into a map thing. They have concepts, documents, all that good stuff. It’s not enough on their computer, so they put their headset on and they go to the same software that also has a native visionOS component, such as Reader, which is our experimental thing. In there, they get a huge screen, just a normal flat screen, just massive. They do some work, and that’s great fun. But then at some point they click a button or do some interaction, and it opens up in Andrew’s world. And it’s really key, excuse me, that it’s the same data and the same layouts. So the JSON that Andrew has so far, we need to work to make sure it works spatially in both directions. That’s really important. They go into Andrew’s world, and Andrew’s world is a cylinder. It is a gray background. It is a calm, peaceful, 360 degree swivel chair environment where the user can interact with everything. Now, how they interact, whether it’s going to be laser pointer or finger, I don’t know. We’ll have to experiment. My personal preference is that it’s close enough that you can reach, because being able to touch it, to me, just works a lot better, with the experiences I’ve had, than having the kind of gun thing.

Frode Hegland: And one thing we discussed a bit on Monday is that maybe, and again, this part of it is purely for testing, you know, we use our main hand for touching and interacting, and we have the other hand. Maybe we do something like make a fist, and as we move it, we shrink and expand the whole cylinder, so it can be far away for an overview, and then we do this and it comes close to us, very much like what Brandel did with Bob Horn’s mural. That’s for discussion later today, that kind of thing. Now here’s the other key thing. We currently have the idea of a control panel in Andrew’s world. Maybe, because we’ve been talking about it, we put the controls on the non-main arm, so that the user can touch something there, and then the same data, the same view, opens up in what Adam is building. So Adam will be building a completely different interaction, more experimental, but, to repeat, the same data and the same layouts have to be stored, plus other things. And then one of the things to discuss today is having yet another way to tap on this arm thing, to take inspiration from Fabian’s block, and go into Fabian’s world. And one thing that I’m thinking now, initially, is what I think the differences will be with the stuff that I’ve been working on.

Frode Hegland: An author is very 2D and it’s fine. Still a lots of things to discuss. What Andrew is working on is very much more dimensional, but what Fabian is doing is much more computationally interactive. So I just thought of as marketing term to call it that smart smart mapping nodes. So all the little things that are in Otherworld, underworld, throat world when they are in Fabian World can have more computational muscle. So we can start experimenting with whatever the heck Fabian wants. But inside, let’s say each. Okay, this is very important to remind ourselves. Our default knowledge workspace is a journal slash proceedings. The one we’ll be using for hypertext will be the actual hypertext proceedings that’s been decided. So that means that we have a limited set of knowledge. It’s not going to be all the knowledge in the world. Right. So that means that the least we’re going to have the paper titled Author names. And then we can discuss what to extract in terms of keywords, names and all of those things. That is very much for our discussion. But having that metadata in each node and having each person being represented as well should mean amazing things in Fabian World. I haven’t really thought of that directly yet, but if we have this control on the arm, where you go between these three worlds and the same data with different kind of interactions, holy moly, how amazing can that be over?

Dene Grigar: I wrote in here a note that the way I’m imagining we can frame this for Sloan is that we’re building for academics, which is a very general term; we didn’t say scientists or humanities people, but each one of these academic fields requires different environments. I’m imagining that humanities people are going to want what I want, which is that clean slate environment, right? But someone in computational areas may want the more computational environment. So I think this is a really good way to offer Sloan not just one way of doing this, but multiple ways.

Frode Hegland: So I fully agree with you, Dene. And I would like to add to that, with all loving respect, that we don’t know what we want. We know the beginning, and that’s really good. But I can imagine that if we do this right, you will also very much work in Fabian’s world, because one of the discussions we had in the group a couple of weeks ago was that you should be able to embed, let’s say, an entire LLM as one node. We have an example from a colleague of Mark’s and mine, Paul Smart, a philosopher who has been building LLMs of famous philosophers. So in theory at least, we could have one of these boxes representing a point of view or a person or a process or whatever it might be. So it is not necessarily Fabian-level coding, which is way beyond me too; you also bring in these really, really clever Lego blocks. And when you go through your humanities work, or whatever it might be, it’s like having supercomputers moving around in the space and helping you organize it. I don’t know what it means, but I’m so excited that we can experiment.

Dene Grigar: Well, I think also, I just wrote in here that Twine is something that we teach in our program. It’s, you know, a very, very easy thing to teach to students. At the same time, it has many different environments: SugarCube, you know, all these different things. And some of these lend themselves well to game development, some just to mapping. And so when you are thinking about making a Twine project, you look at the different types of Twine environments and you pick the one that fits your affordances and constraints. And that’s, I think, what we’re talking about here. Does that make sense, what I’m saying? There’s already a pattern for this.

Frode Hegland: Yeah. Perfect. I see here Mark was talking about the proceedings. I think I should check with Wayne Graves if we can have HTML versions of this year’s.

Mark Anderson: I don’t know how realistic that is. Well, as I’m actually on the committee for that, I can probably give you a reasonable answer. The reality is you won’t have them until fairly late in the day. I think it’s perhaps over-promising. So my sense on this is you do it twice. You build one that you know works. If we get stuff late, and we’ve learned enough from the first process that we can just shim in a different data set, that’ll be fine. But I would almost guarantee that if we went straight for the second, we’d fall at some unseen hurdle. And the good thing is, anyway, if we do one for the only proceedings in the corpus that we have that has HTML, which is 2022, at least we know we can use that, and we’re going to get useful insights from it.

Frode Hegland: There’s a lot of heads nodding with you.

Mark Anderson: Yeah, sorry about the noises off. I don’t know what they’re doing outside.

Frode Hegland: I’m just replying to Peter here.

Adam Wern: So, Mark, what happened to 2023? Is there no HTML for that? So was ’22 just a kind of test balloon thing, or is it the other way around?

Mark Anderson: So basically, around 2021, 2022, conferences and journals that submitted their information via the so-called TAPS system, it’s just an acronym, I don’t know what it stands for in ACM, it now spits out HTML, or it did, but they didn’t really publicize that. And last year, for, well, because reasons, the committee didn’t use TAPS. So the papers were made a different way, which meant that the sort of free ride-along of the HTML didn’t occur, which is a pain in the backside, because I have looked at putting my paper into HTML, and doing links for 205 references is a bit much just at the moment. But anyway, that’s the explanation. So in fact, to a degree, I would expect the opportunity of HTML to be a default going forwards. So there’ll be more of those, apart from people who use a different method. The problem being, especially with the committees, they’re all ad hoc every year, all volunteers, and while an element of information passes along, all sorts of things don’t.

Adam Wern: So, a follow-up on that. As I understand it, right now HTML publishing is kind of not the standard. Well, it’s HTML, but there is no semantic standard beyond getting a few bolds and links and so on. And all the other things, like the abstract and references, are not coherently marked up, even within ACM, and certainly not between publishers. Is that correct?

Mark Anderson: I don’t believe so. I’m absolutely sure if you dig around, there will be any number of people who’ve had a nibble at it themselves. But is there a standard?

Adam Wern: So basically we have thousands of standards here to work with. Just as papers can have a specific format while the typography doesn’t have a specific standard, we have the digital equivalent of that right now.

Mark Anderson: If I may quickly: this is basically the provocation of the other paper I’m writing, which is that we just don’t have the tools to make this, i.e. you can’t, as the author, deliberately and easily construct within your writing space that extra skein of metadata. So the useful thing we have with HTML at the moment is, as we found with our experiments in XR, that it makes it easier to get the text in and out and get the sort of semantic spine of it, in terms of paragraphs and things. What it doesn’t do, because we haven’t put it in there, and there isn’t yet a standard set of markers for it, is mark up, in a sense, the inner hypertext, for want of a better word, of the document. So, as you say, the abstract, the definable things. But one of the examples we’ve given is that you might have part of the text that relates to figure one and table one. So, for instance, you might want to be able to mark those as well, as an object that, within our malleable XR space, we could take out and do something with. But as far as I’m aware, there certainly isn’t a standard for this. I would be amazed if there aren’t things around, whether they’re being done as papers or, more likely, just individual experiments on people’s blogs. Thanks, Frode.

Frode Hegland: So, in my discussions with Wayne, who is in charge of the technical side of ACM publishing, the weird thing is they have all of this in an XML database, so all of this is there. The question is how to export it. And because they are being nice to us, for several reasons, I don’t think it’s unreasonable for us to say, first of all, give us last year’s in the format that you guys want. There’s nothing wrong with asking; they should be able to do that for us.

Mark Anderson: Are you absolutely sure? Because their public statement is that it’s only papers generated through TAPS that have HTML available, or essentially in a form that they can easily run out. They will have the information. And the other point being, I don’t think TAPS at the moment is generating the sort of extra metadata I’ve just discussed. What they do have is each paragraph as a paragraph in XML, that kind of thing. So structurally they have the parts of it. But, correct me if I’m wrong, I don’t think they are adding metadata that the authors aren’t providing.

Frode Hegland: Yeah. Okay, so one thing we’re confusing ourselves a little bit about here is what data we’re talking about, because we’re not primarily talking about augmenting the reading of a specific document. Of course, a document should be available in our system to be read, looking roughly like a PDF at the minimum, you know, just to kind of keep with the program. But we are primarily talking about interacting with the titles and the basic metadata for documents. So, you know, this is also very much for discussion, but the way I see it is you have the title of a document, the author names, all of these things, on this huge map around you. You should also at least have the abstracts and some keywords and such, so you can do powerful views to see how these things relate, so you can decide what to read. Once you get inside the document, yes, I would also like to have paragraph-level not only addressability but also viewability, you know, focus on this paragraph and so on. Absolutely, no question. I don’t know what they have available for us. I will write a quick email now and ask, but at least we will be able to have the top level stuff. I mean, for last year I’ve already done that. It took me a while, it took me two days. But we do have the abstract and, blah blah blah, the basics.

Mark Anderson: Well, I had all that done at the conference last year in my Tinderbox document. Yeah.

Frode Hegland: So yeah, you have it as well.

Mark Anderson: So I could have run that out for you then at the conference.

Frode Hegland: Yeah, but I did it with stuff that you would wince at.

Mark Anderson: No, that’s fine. What I mean is, the source data. Sorry, this wasn’t a competitive comment. What I meant was that I will endeavor to do the same for the current conference this year, because, for instance, for the stuff that I was working on, there’s a more realistic chance that we might be able to include, you know, the current material.

Frode Hegland: Right. So let’s just say that Mark and I, in dialogue, are in charge of getting everyone the best possible data for this, because this is a discussion that comes up often. So, Mark, we will do what we can to get it for them. Right. So then the next thing that has to be designed by the community is the format for how this should be shared amongst the different environments. But we already have the beginning; Andrew has done that. But we need things like x, y coordinates and all of that stuff. So what will probably be best is if Andrew does what he thinks is best while talking to Fabian and Adam, in case they have some input, and then we just test it. Or what do you guys think? Yes, Andrew, I would love to hear from you.

Andrew Thompson: I was trying to find the react button, but with the new Zoom layout, I... Somewhere in the layout I made, I believe I have a section called “custom” or something along those lines; I forget exactly the word I used. But it’s clearly set up for basically just custom data that doesn’t fit into the classic tagging and document information. And that is where I plan to put the coordinate system. It makes the most sense to me, but we can of course put it anywhere, because it’s JSON, and that was just the layout I made. Of course, I know Fabian is using a different layout now.

Frode Hegland: That’s good, but already some issues that we need to address have come up. One of them: while having coffee last week with Christopher Gutteridge, who is connected to our project, he ran this little thing that compressed the titles of the papers so that they’re more readable. Some of them are very, very long, so having a computational way to make them shorter is useful. And the reason I’m mentioning this now is that it also relates to what we mean by coordinates. In Author, when we tried to change the map, sizing the text, just making it bigger or smaller, became a bit problematic, because we had to decide where we’re scaling it from, which in this case means where we are, what the center of the object is, and what the object is. So I really look forward to the experiment with you guys just figuring that out, because it’s a bit tricky, I think. I have a feeling Adam has thoughts on this.
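The scaling question raised here, that making something bigger or smaller only makes sense relative to an anchor, reduces to the standard formula p′ = anchor + s·(p − anchor). A minimal sketch, with invented names, just to make the ambiguity concrete: choosing a different anchor (the viewer, the object centre, a corner) gives a different result for the same scale factor.

```javascript
// Scale a 2D point about an arbitrary anchor by factor s.
// With s = 2 and the anchor at the object's centre, the object grows in
// place; with the anchor at the viewer, it also moves away from them.
function scaleAbout(point, anchor, s) {
  return {
    x: anchor.x + s * (point.x - anchor.x),
    y: anchor.y + s * (point.y - anchor.y),
  };
}
```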

Adam Wern: Yeah. For the moment, I think we should just do whatever we need to get more understanding of what data we want to support, in order to build the format. It’s extremely hard to do a proper data format beforehand. It’s kind of emergent, and usually it comes out best when you have 2 or 3 systems that you need to find a standard from within, or else you just copy the internals of a specific application. That’s fine for our case right now. As I understand it, we’re working with custom data; the data set is highly bespoke. We can’t just take any data, not even any ACM data; it’s a very highly bespoke system. And Author and Reader are also highly specific: a highly specific Mac system with a history to it as well, where you build up data. So at this point, I don’t think we can sit down and design so much beforehand. We need to actually just get the data down there, and then in a future iteration we could perhaps make it more refined and better looking. But I think it’s very hard right now.

Dene Grigar: Can I say something? I actually like the idea of trying different interactive environments and not putting a lot of energy into what they look like. So I’m agreeing with you, Adam. And I think it’s going to be great to have you and Fabian go in different directions than Andrew and build out the possibilities, and then see how the text works in those environments. Right? That’s what we’re doing, and not worrying about the rest of it right now. The interactive elements do not have to be the same in each environment; we can decide later, once we do testing, because we need to do some usability testing at the conference. Let people test to see which interactions make more sense to them, and then we can start to standardize our own standards.

Frode Hegland: Yeah, but we will still need to have the elements be known across the different environments as being the same element, at some point; well, I think relatively soon. So if you move something to the left in Andrew’s view and then you go to Adam’s, it should be moved the same way to the left. I think that’s going to be a very, very important aspect of this. It may turn out not to work, but as a data transport and open standards thing, I think that’s crucial. If it turns out to be absolute hell to implement, you know, we can revisit it. I do think it’s pretty core, though.

Dene Grigar: Could I say something, for everybody to respond to? I agree with you that the ultimate product should be that. I’m just wondering if what we’re trying to determine right now is: what’s the best way in which we interact with text in these environments? That’s the big question. And so if we have different opportunities, Andrew’s, Adam’s, Fabian’s, and test them at the Hypertext conference, somebody might say, this is actually easier, this is actually better. And then, if we find out that that’s actually not hard to develop, we can implement it across the board.

Frode Hegland: Absolutely. But in terms of the actual data, right, that’s really, really important. And I don’t know if we’re over-discussing it. So let me just ask Fabian, Andrew and Adam: do you guys feel that the coordinate data will be clear enough that you can share it amongst each other, or do you think that will be a major logistical issue to work out?

Andrew Thompson: Well, it depends, right? That’s a big asterisk. For coordinates, for me, since I’m working in, like, a cylinder, X is not actually position, it’s rotation, based off of the center of the world. That shouldn’t really matter that much, because you can just interpret it differently. But I do use that system. So Y is up and down, of course, X is rotation around the world, and then Z would be viewing distance. So you probably don’t have to take Z at all; Z could be something that all of our systems just do their own thing with. But that’s how I’m working with the coordinate system.
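Andrew’s convention as described here (X as rotation around the world centre, Y as height, Z as viewing distance) could be mapped to ordinary Cartesian coordinates roughly as below, which is how another environment might “interpret it differently”. This is a guess at the intended geometry: the units (radians) and the −z-forward convention, as in WebXR and three.js, are assumptions, and the function name is invented.

```javascript
// Interpret a cylinder-space triple {x: rotation, y: height, z: distance}
// as a Cartesian position around the viewer at the world centre.
function cylinderToCartesian({ x: rotation, y: height, z: distance }) {
  return {
    x: distance * Math.sin(rotation),
    y: height, // height passes through unchanged
    z: -distance * Math.cos(rotation), // rotation 0 puts the item straight ahead
  };
}
```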

Frode Hegland: Yeah, yeah.

Adam Wern: And I wonder how often we want positional data transferred between the systems, or used in both systems. If we move something in the cylinder space, Andrew’s cylinder, will that really translate to, let’s say, a free-form, all-over-the-room AR experience, which is what I’m trying out? Probably not. So it’s more about preserving that data. If you open it in my system, it should really still keep all of Andrew’s coordinates in the cylinder space, so nothing gets lost in translation, so it really saves all the different interactions. So I think that is a main concern. And if we want to translate things, we can do that programmatically; that’s doable, of course. But the most important thing is to preserve the data and not throw things away when you go back and forth.
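One way to realise Adam’s “preserve, don’t translate” point is for each item in the shared JSON to carry one layout block per environment, keyed by environment name, with each world only ever writing its own block. The field names below are invented for illustration; they are not Andrew’s actual format:

```javascript
// Attach or replace one environment's layout without touching the others,
// returning a new item object (the original is left unmodified).
function setLayout(item, environment, layout) {
  return {
    ...item,
    layouts: { ...(item.layouts || {}), [environment]: layout },
  };
}

// Round trip: place a paper in the cylinder world, then in a free-form
// world. The cylinder coordinates survive untouched.
const paper = { id: "paper-42", title: "Spatial Hypertext Revisited" };
const inCylinder = setLayout(paper, "cylinder", { x: 1.2, y: 0.4, z: 2.0 });
const inFreeform = setLayout(inCylinder, "freeform", { x: -3, y: 1, z: 5 });
```

Going back to the cylinder world after rearranging things in the free-form world then just means reading the `cylinder` block again, so nothing is lost in translation.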

Frode Hegland: Yeah, that’s absolutely crucial. And this is so important. Sorry about all the yellow hands; we’ll just address this and then we’ll go to hands. Yes, each environment needs to be able to store its own data. Right. I would say as a Visual Meta appendix that can be deleted or not, used or not. But I really do think that the notion of spatial hypertext means that the user may have certain logic for layouts. So when we go between environments, it should be preserved, unless the user has expressed a reason for it not to be. Right. So as a basic thing, I think it’s really, really important. But let’s say Fabian does something completely off the charts; we should still be able to take the data in and out. Adam, what are you thinking?

Adam Wern: I’m trying to understand here. Are you thinking of a kind of 2D mind map that will look the same if you move it from the cylinder to my wall, for example? Is that what you’re thinking about, or are you thinking about full 3D positioning?

Frode Hegland: I am thinking about both, but, and this is what we need to define, probably slap me in the face if I’m wrong, we will have an initial 2D layout with depth to it, which of course in Andrew’s world becomes a cylinder, but it still has the central point; it works from the central point and has depth going out, and different things. This should be entirely walkable in Adam’s and Fabian’s worlds, of course. But the thing is, it takes time and effort to do these layouts. And one of the things we discussed, of course, is that the user should be able to save layouts and save layout criteria. So there’s nothing wrong with going from Andrew’s world to Adam’s world and it looking completely different, if that is what the user has specified. But if the user is still doing the basic stuff, you know, having spent all this time doing this, that and the other and putting on the headset, it shouldn’t be a mess, that’s all. Okay, lots of thoughts on that, and we will go to the yellow hands. So, yeah, Samya first. Yeah.

Samya Brata Roy: Thank you so much. It has been great to listen to the discussions. I mean, I am not privy to all of the conversations that have happened thus far, but one thing I thought I’d chime in on is the point about shortening the titles of papers. Instead of shortening the titles, maybe you can focus on putting certain words of the titles in bold, or increasing the font size of those words, while the others remain out of focus or in a smaller font, so that the entirety of the title remains but the reading becomes easier. So I don’t know if that makes sense in the design environment that you are talking about, but you can still have the space shortened, with the font size of certain words reduced and certain words bolded and enlarged. So I’m just getting my thoughts out there.
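Samya’s suggestion, keeping the whole title but emphasising the informative words and de-emphasising the rest, could be sketched as below. The stop-word list and the two-weight scheme are purely illustrative; a real version would need a better keyword heuristic and would feed the weights into whatever text renderer each world uses:

```javascript
// Words to de-emphasise; everything else gets rendered bold/large.
const STOP_WORDS = new Set([
  "a", "an", "the", "of", "for", "and", "in", "on", "to", "with", "using",
]);

// Split a title into tokens tagged with a rendering weight, so the full
// title is preserved but the informative words stand out.
function emphasiseTitle(title) {
  return title.split(" ").map((word) =>
    STOP_WORDS.has(word.toLowerCase())
      ? { word, weight: "small" } // render small / light
      : { word, weight: "bold" }  // render large / bold
  );
}
```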

Frode Hegland: Yeah. Thank you. Useful and important. These are exactly the kind of things we need to experiment with. But Mark.

Mark Anderson: A couple of things. First of all, it’s perfectly possible that this idea of augmenting the HTML with something extra is something we could do for this year. I could have a look at that, probably once these current papers are out of the way, just by way of, in a sense, a confected demo. For instance, I’m quite interested in this idea of being able to find a part of the paper, comprising maybe noncontiguous bits of text and images or tables, that basically forms a whole, and being able to show that these could be addressable objects. And obviously what you do with them, we won’t know really; we need to be able to get hold of them before we realize whether it makes sense or not. We can experiment with that. I’m minded also, Dene just stepped out, but I’m reminded there is going to be a summer school, or there should be, there’s hope to be, just before the Poznan conference, so the Saturday and Sunday before. So there’s an opportunity, possibly, because we’ll have a number of grad students there, to do an experiment. I mean, if we’re ready for it, and have all the forms signed off and things, we could possibly run some literally hot experiments that we can report on in the main conference. Just an idea. I’m not saying we have to do it; it’s something we could do if we’re ready for it. On spatial hypertext, which Scott mentioned, it’s worth bearing in mind that there are two sort of conflicting, well, not conflicting, two alternate conceptions of it. One is where it’s algorithmically laid out, and the other where it’s manually laid out by a human.

Mark Anderson: The latter is the one where you probably definitely want to hold on to where something was, even if you never use it again; you don't need that position until you do need it and you can't find it. Whereas algorithmically, it will go back to ostensibly the last state, or whatever state you saved, because the algorithm should put things back where they were before. And lastly, it occurred to me that the different things you may be doing, in terms of the data you need to pass across, are sort versus position. Part of it is positioning things differently from something else. Allied to that is sorting and set making: you maybe want to take something out of this group here and put it over there, and whether it's a copy or the only piece on the board at the moment is, again, for exploration. But those seem to be two distinct activities. And to quickly say, as Dene is back: it occurred to me that the summer school is an opportunity to possibly do an experiment, if we're that ready, because we could try some of the things we're talking about. We've got some additional test people, to the extent that when we get to present at the conference, we may be able to say, well, we actually tried it with ten people, and here is some of the stuff we found. It entirely depends on whether we're ready or not, but it's something to bear in mind that hadn't occurred to me until the other day. Thanks, Peter.

Dene Grigar: That’s a great idea. And let me mention to Seamus that we have a summer school that is providing full support for people that want to or graduate students that want to study hypertext over the summer. And then at the conference in Poznan, Poland. So let me put that in your ear, let you know about it. Okay. But Mark, that’s a great idea.

Frode Hegland: And Peter.

Peter Wasilko: Okay. I like to get as abstract as possible, so I don't necessarily like the idea of passing exact coordinates. We might be in different size spaces: I might be interested in having something in a two meter cube, while someone else might be in a fully immersive mode. So the exact coordinates might not be as relevant as the relative relationships between elements. Also, in a given visualization, I might want to be mapping the coordinates, as I'm moving something around, to an attribute that's stored in visual meta, so that it wouldn't be x that I'm moving around so much. It might be that by dragging this object close to a marked centroid, I'm indicating what degree I think it belongs to that set. And I might want to have a set membership level be bumped up in the visual meta for the document, and not care where it is in x, y. In the space that I'm working in, I might be constantly changing what the dimensions the points are arranged in mean, and I want to be working in terms of binding functions, associating attributes to where things are positioned spatially.
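
Peter's idea of binding drag position to a stored attribute, rather than keeping raw coordinates, can be illustrated with a minimal sketch. Everything here (the function name, the linear falloff, the radius) is an assumption for illustration, not anything agreed in the meeting:

```javascript
// Sketch: derive a fuzzy set-membership degree from how close an object
// is dragged to a set's centroid, instead of storing its raw x/y/z.
// Linear falloff: degree 1.0 at the centroid, 0.0 at or beyond `radius`.
function membershipFromDistance(objectPos, centroidPos, radius) {
  const dx = objectPos.x - centroidPos.x;
  const dy = objectPos.y - centroidPos.y;
  const dz = objectPos.z - centroidPos.z;
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  return Math.max(0, 1 - dist / radius);
}

// The degree, not the coordinates, is what would be written into the
// document's metadata (e.g. a visual meta block).
const degree = membershipFromDistance(
  { x: 0.5, y: 0, z: 0 }, // where the user dropped the object
  { x: 0, y: 0, z: 0 },   // centroid of the target set
  2.0                     // radius at which membership reaches zero
);
// degree === 0.75
```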

Frode Hegland: Yeah, that’s really important. So to paraphrase you, to see if we’re on the same wavelength. Relationships of different types matter. Yep. So, Fabian, you wrote here. Didn’t get my turn. Sorry. I wanted to say that I shared the csl JSON with additional metadata, starting with position a few weeks ago. Is that something you’ve had a chance to look at, Andrew.

Andrew Thompson: I did glance at it. I haven't gone through it in depth, but I saw that it was there. My system doesn't use CSL JSON, so I currently can't do anything with it. But if we want to try to adapt to this, I can try to make changes; I'm not against making changes, I'm just saying where it's currently at. Mine just uses JSON, because that's what three.js exports, so I use that export and then do some fiddling on top. Most likely I'll redesign the save-load system once we get to the map view, because it doesn't need as much information. That's what's going to get stuck into the library view. Well, I think that's what we called it at the time; we have a lot of changing terminology recently, so I'm getting caught up on what means what at this point.

Frode Hegland: Yeah. No. Well, this is crucial. This, again, makes me very happy to have this discussion. So Andrew, could you please explain to us what you are currently using for position data? In the sense that you're talking to Adam and Fabian; ignore people like me.

Andrew Thompson: Okay, so what I'm currently using doesn't really matter, though, because it's for saving and loading the workspace, which nobody else is doing. That's not what's important. We're talking about all sharing, saving and loading the map view, correct? Or am I completely on a different page from everybody else?

Frode Hegland: The page you should be on is: hang on a second here, just to make sure we really are on the same page. You can see the screen, right? So what I have here is a list of all last year's papers. I go to the map view and here it is, and it's all a bit of a mess. So I can do things like this that most of you have seen: I'm now choosing categories of things, like only documents, hiding them, bringing them back, and then I can do layout things. So when you're talking about a workspace, I would say that relates to this: where these things are. It also might relate to other things, as in external elements, looking at what these things are, ideally. Oh, okay. Yeah. Leon, look forward to seeing you on Monday and seeing how you want to work with this as well. So, ideally: okay, we're currently calling this high resolution thinking, right? And that's partly because you're working natively with a full field of view. So these different approaches that you guys are doing will provide, as Dene highlighted, the different interactions that are so important. We don't want to build metadata behind that which will slow any of you down. But there should be enough transferable metadata about not just the static stuff inside this, but also, Fabian, the active stuff inside these things. And it should all, to some degree, be compressible to a 2D space. I was thinking Fabian first, but Adam, do you have a quick point on this, or are you just holding something? You're muted.

Adam Wern: Let’s Fabian take that. He’s been run over. Over.

Fabien Benetou: Regarding the positioning and the format: I think I said this before, but I'm not advocating that specific format. It's the one I implemented, so it's easier for me. We can have filters that convert from one format to another. Overall, none of them seem crazy one way or another. So as long as there is some kind of use case, that's what pulls, let's say, the data we need to use, and then we focus on converting from one format to another, whatever format that is, relatively easily. It also makes it easy to test. So honestly, that doesn't strike me as a hard problem to solve. Sure, if one space is a cylinder and another one is orthogonal, there is some conversion to do, but it's pretty normal to do. It's the same way you go from a flat map to a spherical map. It's something that must be done anyway, so I'm not too worried about this. We just need to find what we actually want to do after switching from one environment to the next; this way it pulls, let's say, what we need. One quick thing: I have to leave at half past as well. I've refreshed, so I hope you can let me know if you can see my screen, please.
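
The conversion Fabien describes, between a cylindrical layout and flat Cartesian world coordinates, is indeed mechanical. A minimal sketch, with the axis conventions (y up, negative z forward, as in three.js) assumed rather than agreed:

```javascript
// Cylinder layout: angle around the viewer, height, distance from the axis.
function cylinderToCartesian({ angle, height, radius }) {
  return {
    x: radius * Math.sin(angle),
    y: height,
    z: -radius * Math.cos(angle), // negative z is "forward" in three.js
  };
}

function cartesianToCylinder({ x, y, z }) {
  return {
    angle: Math.atan2(x, -z),
    height: y,
    radius: Math.sqrt(x * x + z * z),
  };
}
```

A round trip through both functions returns the original values, which is the property that matters when moving a layout between environments and back.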

Frode Hegland: Yes. Yep.

Fabien Benetou: So that’s the trick for me. Just refreshing. So it’s it’s the same as before, except, like, following Denny’s comment, I couldn’t help but think. Indeed. I need to adjust the layout. And what if your usage is not my usage? Because your dexterity is not my dexterity. So what I do, you can see, I hope that I’m in the environment changing the layout. So the buttons from that menu change the colors till now. I can show them even when I don’t use them. And then I change. And when I go back and forth, they preserve their position. But and I wanted to show this because to me, I know it sounds maybe a little bit inappropriate when I put the headset while I’m chatting with you all. But what I get from those meeting is such ideas that I didn’t consider before, which is, like, the most valuable thing for me. And then I just can’t wait. I want to try it and be able to go from the idea the demo to the new idea that you provided, not me. And then I can try it and say it actually works and makes sense, and then show you again to me, the extremely precious. So I wanted to say thank you for this.

Frode Hegland: That’s really nice to hear from you. And we’re going to finish in six minutes. Emily also has some problems getting to Edgar to pick him up from school today. So it’s all fortuitous that most of us have to finish soon. But. So this is what I would like and tell me if it is reasonable or not by next Wednesday, okay, because we’re running short on time to better understand what to put in our paper, I really want Adam, Fabian and Andrew to have managed to have maybe even a half an hour meeting to do the basic sharing stuff, because Fabian, with all the love and respect, you’re going to get a big hug when you come here. I really get scared when you technical, brilliant people say it’s really easy and I’m not worried because to me that says I’m not going to do it now, right? So I really would like you to take what Andrew has as a Jason. If Andrew could reshare that, have a look at it and either say you will copy that style or tell Andrew to share something. Is that fair? Because just like we were stressing months ago getting something in XLR, now I’m a little bit stressing about being able to share. I don’t want to get into the summer and be like

Fabien Benetou: Yeah, sure. Send me the JSON file and I'll display it, even, how do you say, incorrectly. It doesn't matter; at least there will be names floating in space in the wrong positions. And then if I get stuck, I ask Andrew. I'm not worried about this. So, okay, in order not to sound too arrogant: if I get such a JSON file by the end of the working day today, then by Monday next week, or Wednesday at worst, I'll get something from someone else's environment, like Andrew's, into mine.

Frode Hegland: Well, that’s wonderful for Albion. You get to choose the first dinner we’re having when we’re together in London. Adam, how about you? Are you okay? Also getting this nitty gritty at this point.

Adam Wern: Yeah, but I’m still a bit reluctant to kind of the ex is sharing coordinate between different views. Doesn’t make so much sense to me. Other things like sharing a note or comment on a paper or something that would make sense to have in different views that you could pop up in different ways and different views. But I think taking one workspace and and trying to force that into another way of viewing it, it feels like Yeah, I don’t really see. I can understand it for smaller pieces, perhaps a small map, but for the whole interaction doesn’t make sense to me because it’s the kind of the space we’re transforming. So taking one coordinate or rotation from one system to another doesn’t make much sense. It can be done. But then I’m replicating the things the other person just did.

Frode Hegland: Okay, I see Fabian's got his hand up, but the thing is: yeah, that's a really good addition, by the way, a note or an annotation. We need to take that into account as well, as a kind of floating element that needs to be connected in a way. But the use case we have here is someone going through the proceedings of one conference. That's the specific goal: for someone to understand what all these different documents are, see what's relevant to them, hide the ones that are not, and then (Siri, I'm not talking to you) try to have access to various views. So it's not going to be the world's largest data set by default. Of course it can get bigger over time, but that is why I really think: you're doing a thing here, you want to do it here. I expect you, Adam, to go absolutely crazy. But I would really like it if you could try to have the most basic replication as an initial thing. Before you answer, Fabian, quick.

Fabien Benetou: Super quick. I put it in the chat; it's called Google.org. It's what I used in a GIS project before, for real-world coordinates. But in terms of going from one set of coordinates to another, I think it's a worthwhile exercise, because indeed, if we want to interface with others that have such a mission, we don't care about the specific format. I'm not saying it's going to be easy or useful or whatnot, but in principle this is not the only place where it's useful. So it's worth trying at least; it's valuable.

Frode Hegland: Yeah. Good. Dene has to go somewhere. You have to go. Thank you, Dene. Go, please.

Dene Grigar: Yeah. I just wanna say thank you, Samya, for coming today. Good to see you. And thank you, everybody. And I will have the draft of the demo done this week. Okay.

Frode Hegland: Before you go, Dene, two words: view spec. Mark Anderson calls it a view spec. That is really how we should think about this.

Speaker13: I mean, think about what?

Dene Grigar: The paper. What? No, no.

Frode Hegland: The different people, the Andrew, Adam and Fabian worlds, in a sense. Forget the fact that it's different code: the different views of the same data.

Dene Grigar: Ah, that makes sense. Okay, good. Thank you. I just kind of threw it out there, and I didn't see the remark. Okay. Bye, everybody.

Speaker13: Bye bye.

Frode Hegland: Yeah. See you later, those of you who have to leave. Andrew, Adam, Fabian: this is the time for you guys to really make sure you're happy with everything.

Andrew Thompson: I guess the main thing is, Fabian, you're going to wait on the JSON from me to see if you can do anything with it. Adam has an interesting point: perhaps we all save our own positional data independently. We give them different variable names in the information, so if we want to interpret one another's data, we can, but it's not necessary. I don't know, that's an option. Fabian, my current JSON test export, the one I made back when we called it the library, doesn't have the positional data; it just has room for it. So I can get that to you right away. Or, if you want me to actually get a version that has the positional data I currently use, I can try to get that today, but most likely that would be tomorrow.

Fabien Benetou: So just to clarify, I'm not in an actual rush, so don't stress over this. I think it's better to delay a little bit to have actual positions, because otherwise I don't know if my conversion, if I need to convert anything, is correct. So I think it's better to have the JSON, or whatever format you want, with positional data. And we're all working with three.js, so theoretically, in terms of position, as long as you use world coordinates it should be easy.

Andrew Thompson: Yeah, we’re just using world coordinates. That’s fine. And I could adjust the way I save it because like I said, I haven’t currently built a save system for the sort of map view. I’m saving the workspace information, which isn’t something that carries over from what I understand. So having the the map save, it’s kind of a bit theoretical right now, what we actually want. I assume we want, like, every tag to be saved. Different positioning and whatnot. Which. Now to think about it. The tags are set up as an array right now, so if we want information on that, that’s actually not going to work. Okay. We may have to do some changes with the layout.

Fabien Benetou: I gotta run, but I trust you.

Frode Hegland: Just briefly, before we let Fabian run away: I put up this little stupid quote, "Man is the measure of all things." All I mean by that is we are working in a human space. We're working in a room. We don't have to worry about having a galaxy of information for this bit right now. Bye bye, Fabian.

Frode Hegland: So Adam, the Peter point about the importance of having metadata and being able to delete it: is that kind of what you were saying? We all have our different coordinate systems, our different metadata, and the other systems can choose whether to interpret that or not. Is that the approach you were thinking about?

Adam Wern: That's one part of it. It's more that I struggle to see the exact use case, from an interaction point of view, where you would be in one workspace and then use all those coordinates in a totally different workspace. I totally see it going from a laptop into a VR space, for example: taking a 2D layout into a 2D-in-VR view in XR, that conversion is fine. But going between different kinds of 3D environments doesn't make so much sense to me right now as a use case. What would you actually do with it? If you lay something out on the cylinder, or in a space that is perfect for you there, what kind of information would you like to have in a completely different kind of 3D?

Frode Hegland: Yeah, that’s a very clever question. And just briefly before I hand over to Mark. You’re absolutely right. I would like to be able to go from a flat screen into XR, do stuff and go back and lose nothing, even if the XR, you know, I wanted to be able to be printed on paper, have visual media at the back, and if it scanned OCR again, you can still go to XR. That’s a prime dream for me. However, the intelligence inside the nodes and all of these things needs to be captured externally as well in the metadata. So there’s absolutely nothing wrong with, for instance, having only 2D, XR and back and forth the simplest and then having additional stuff or additional environments if you want to have it build an environment’s got nothing to do with that. That’s basically a sculpture.

Speaker13: Fine.

Frode Hegland: You know, that should be storable in a different way. It doesn't have to fit on a flat page, necessarily. So I'm not completely disagreeing with you. I just think the notion of view specs, the notion of trying to have things reflected in different ways, is useful, but it shouldn't constrain us. Mark, please.

Mark Anderson: I’m just going to sort of reiterate my point about the view specs was really I that’s what I heard when I heard Adam speak earlier, in the sense that if I draw something, if I take the same data and I draw it as a tree map based on some arbitrary attribute of like maybe the number of words in that in that object will give me a different set of metadata to if I laid it out as a spatial hypertext, or if I laid it out as some other thing. So. Her sort of size and position may be very much tied to what you’re actually doing. So I think I think there’s an unintentional confusion creeping in here between me taking something I’ve laid out in my room, as it were, and putting the same information, maybe in a basically in a similar view spec of somebody else, as opposed to me going, taking the same data and just deciding that instead of seeing it in this form, I want to see it in that form, in which case much of that much of the positional metadata will no longer have meaning, simply because I’m no longer using that positional data in a meaningful way, or I’m using a different set of positional data.

Mark Anderson: So it's almost like there are two intersecting things that need to be captured. Some of it is to do with just: when I initialize this space, where the heck do I put things? And there are other things that will be more tightly bound to what you are doing in that environment, if I can call it that. In other words, say I go into the space that Andrew has created for us, the demos; that's one space. If I then take that information and put it into something that, say, Fabian has built, there may be a limited amount of information (and Adam, correct me if I'm wrong), a limited amount of the information I've taken from the first place, that has use in the second space. Though we still want to hold on to it, if for no other reason than that we might then want to take it back from the second space into the first. I mean, have I understood correctly?
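
One way to implement the round trip Mark describes is to namespace each environment's layout under its own key, so a space that cannot interpret another environment's positional data still carries it along unchanged. A sketch, with invented view IDs:

```javascript
// Each environment reads and writes only its own block under `views`;
// blocks written by other environments are preserved untouched.
function writeLayout(metadata, viewId, layout) {
  return { ...metadata, views: { ...(metadata.views ?? {}), [viewId]: layout } };
}

const meta1 = writeLayout({}, "andrew-map", { "paper-042": { x: 1, y: 1.5, z: -2 } });
const meta2 = writeLayout(meta1, "fabien-cylinder", { "paper-042": { angle: 0.4, height: 1.2 } });
// meta2 still contains the "andrew-map" layout after the second write,
// so taking the data back into the first space loses nothing.
```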

Adam Wern: Yeah. I've been there many times in programming, when you have two systems and two different views. Let's say you are in 3D and add a new thing: where will that land in 2D when you go back? And the other way around: if you add a thing on your flat map in 2D, next to something, where does it land in 3D space? It's easy to gloss over these things, but very little information actually transfers well, and you need to hand-do most of it. That's one thing. But there are many things that do transfer. I think making collections and lists, and things like annotations, can transfer very well, because the most important part there is not the exquisite layout you created in one space, but the information itself. It's just like a note on your phone: you could have a very different layout on your phone and it still works fine for most notes. Not all of them, but most notes can look very different. The positional data, though, is not one of those things; it's very tied to the space.

Mark Anderson: It strikes me also that the positional data matters more if you're dealing with something like an environment that's a memory palace: it really matters that something's on that side rather than this side. Otherwise, it may generally be the case that what you need to know is the spatial relationship of the objects. In other words, you may sketch on a bigger screen or something, and the key thing is that, relative to one another, things are in the right place, where "right" is a flexible term. Whereas obviously, if I took something that had a very precise meaning, say we were projecting it onto a room, it's no good if some of the things end up underneath the floor of the room, because clearly then the artifice of the room doesn't work. But that's how I can see this sort of difference in the stuff we capture.

Frode Hegland: So this is the kind of discussion that is just, I know I've said it before, so important. I'm so grateful. Because just like with Fabian's little pinching thing, where it sticks to your finger, there are going to be so many unknowns here. The basic dream is kind of Harry Potter-ish slash Minority Report, right? You should be able to work in almost any medium and then go into craziness. Not everything can come back with you, of course not, but everything should be retained, so if you go back to that crazy environment, you will have that data. I think we all agree on that. As for the different things to go in: we also really need to think about Adam's notion of annotations in this environment, because an annotation may be just a thing on a thing, that's simple, but it may be important where on a thing it is: top left, bottom right. Maybe it means something for the user. It may be floating between two things. So when it comes to this, I just think we should build, test, build, test, build, test. How exciting is this, guys? We're literally building an entirely new way of looking at the world. Of course, Andrew, you have to do a lot of the coding, so you may not share the excitement, but the outcome will be amazing.

Frode Hegland: Yeah. I mean, even just working on the map in Author has turned out to be a real pain, because incrementally it's "oh no, this is not good, this is not good." So learning that stuff, and trying to get my programmers to do it right now, is a hassle. Oh, hang on a second.

Speaker13: So with apologies to Mark. I have a book. All right.

Mark Anderson: I’m Betty Carew. It’s a new one.

Frode Hegland: Yes. I thought I had preordered it, but it turns out I did actually order it. It's good; I highly recommend you all get it. I've just glanced through it. We invited him to be part of our work; he hasn't replied yet. But the point is, there are so many ways of looking at this. So, Adam, thank you for not letting us look at this in a simple way, because I know you're going to do some crazy stuff.

Adam Wern: But you’re telling me as a crazy, crazy person, you just think that we we actually in this community, we’ve been doing more things than we remember for the last few years, and we investigated. I think we have forgotten that many of the learnings we could have so we really should dig them up. Sometimes it feels like we are starting over and we learn new things, and we’re. But that’s the nature of it. And it’s a new group. I really want us to to remember what we learned along the way. We tested lots of 3D stuff early on, or XR stuff or VR stuff and inventories. And we tried we tried games and discuss that and some of it we lost along the way. And it’s good to kind of quickly get that back into a place where because I hear the echoes of that. Right. In the.

Frode Hegland: Well, on the Future Text Lab website we do have links to at least what we did two years ago, and most of it works, which is good. I completely agree; it's crucial we keep it. The way that Andrew is working now, every bit of code stays frozen for each version, and it's on our record, so I think that is absolutely important. I'm also very grateful that we're now working to have a bundle of data, and environments to work with it in. Because, at the end of this: I remember Doug was very disappointed after the '68 demo. He expected people to come and say, "This is great, I'm going to compete with you, let's have fun, let's make it better," and basically nothing happened. I'm not saying I'm Doug Engelbart or anything like that. But I do think that, at the start of the hypertext conference, to put people in this, looking at real stuff that they care about, and then understanding that this is data that's shared in different environments already: that's powerful. Peter, I see your hand.

Peter Wasilko: Yeah. I just wanted to say that it's been a really great discussion today. I love that we're not getting too locked in to the specific coordinates of one given implementation; it's important to keep that high level in mind and really lean hard on visual meta and what we can do with it, and extend it beyond just being a copy of a bibliography of BibTeX listings, as it originally started. As I see it really growing and starting to mature, we can start thinking about adding new modules for new kinds of information that we can layer on: a module for AI summarization, a module for entity and reference extraction, a module for those fuzzy resource leads that we get, which we currently don't have recorded formally anywhere, and which can make such a huge difference. And that's the little piece that I'll be mailing you later today, once I get a chance to give it a good close proofreading. But unfortunately for now I need to drop and log in to a webinar starting at the top of the hour. Thank you. See you on Monday.
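
Peter's modular extension could be sketched as named blocks appended after the BibTeX-style core. The block names and field syntax below are speculative, not part of any visual meta specification:

```javascript
// Speculative: serialize named modules in a BibTeX-like appendix style.
function visualMetaAppendix(modules) {
  return Object.entries(modules)
    .map(([name, fields]) => {
      const body = Object.entries(fields)
        .map(([key, value]) => `  ${key} = {${value}},`)
        .join("\n");
      return `@${name}{\n${body}\n}`;
    })
    .join("\n");
}

const appendix = visualMetaAppendix({
  "ai-summary": { model: "example-model", summary: "One-paragraph summary." },
  "entities": { people: "Engelbart; Nelson" },
});
```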

Frode Hegland: Thank you, thank you. Bye, Peter. I see your hand, Mark. I just want to obviously thank Peter for talking about visual meta, and remind you all that visual meta is: here's data, easy to find, easy to delete if you want to. I think that's important. Mark.

Mark Anderson: I was really noting, more for the ongoing record, the fact that, funnily enough, the hypertext community, and I mean that in the most expansive sense, not necessarily only the academic hypertext community, is rather well placed to look at this, because it's less grounded in fixity. For a long time it's been quite happy to grapple with things like multifaceted narrative and non-linear narrative. So this idea that a thing doesn't have to have one fixed form is, I think, less alien in the hypertext community than it is in many other places. And the other useful thing is that it's had a long lineage of both the purely creative and, in a sense, the pure engineering sides sitting quite closely together, because there's always a certain degree of yin and yang: you can't build a bridge on hope, it actually has to stand up; but by the same token, if you have too much structure, it crushes the creativity. So I think it's really useful to be doing this in the context of something like the hypertext conference and, through that, the wider community, because I think they're better placed to understand the broad issues. What can otherwise happen is that people latch on to much more minor things, all of which need doing; there will be big engineering problems in the corners, and those in themselves will be a big job for somebody to sort out. But at this stage, trying to see the broad strands, what works and what might work, is, I think, well placed in hypertext.

Frode Hegland: Yeah, absolutely. And you'll put the JSON on Slack again so that Peter will see it, right?


Andrew Thompson: Yeah, I thought we’d agreed that I was going to make some adjustments to it, though, to include the coordinate system.

Speaker13: Sure.

Frode Hegland: No rush, just whatever. I just wanted to highlight it, you know.


Frode Hegland: Oh, by the way, a very British thing, Mark. On the way that I walk to school in the morning, there were a lot of overhanging branches, kind of dangerous; people kind of have to go into the road. So I complained to the council last week. They actually have a web-based system for taking those kinds of issues, and today Emily just texted a picture: the branches have been cut. That doesn't happen very often, so it deserves to be on the record. But anyway, in closing from my side: I am so grateful for the very, very different perspectives in this group, because we are dealing with something entirely new and not new, and I think each one of us sees new and not new quite differently, and that is really, really important. The one legacy that I want to have from this work is to be able to inspire other people to think about information interaction in different ways, and I think we're really working on that. And Mark and Andrew, just to clarify: Adam is kind of our official outsider-insider for the first six months, Fabian for the next six months; in terms of specific timing it doesn't matter, but they are the key people doing the other stuff that comes into this. So it's not just a general discussion here; it's more formal, in a way. Adam, do you want to say anything else on that topic?

Adam Wern: Mostly that I will start with what Mark and I have been doing, the visualizations of hypertext, but I'm very interested in doing more XR things; I feel that would be fun, or very interesting. Right now we're doing the 2D translation, mostly 2D stuff in 3D. It would be interesting to fully take on hand gestures, voice, and perhaps a bit more immersive, three-dimensional things as well, so we really use the medium. We already know what the critique will be otherwise: "I can do that on my screen." I want to have a few things that can't be done on screen, or easily done on screen, just to push the boundaries a bit. But we could also push the boundaries just by doing good interfaces that haven't been done so far in 2D, even though they should have been: with metadata, or with interaction, or with writing or reading. There are so many low-hanging fruits that are not picked in e-book readers and everything. So we have lots of opportunities wherever we start now, because things are not that well implemented in the text space, at least.

Frode Hegland: I’m so glad to have that towards our closing comments today, because I think we’re very much on the same page. We need to do something that is obvious but has to be done. And Andrew, that is primarily you. You’re very good at making these things work, and you are our safety catch, because what you’re building, you know, it is 2D and it’s a cylinder, on one level obvious. But then we see how bloody complicated it is to do well. And then Adam does some crazy stuff, most of which we expect not to make any sense. But that doesn’t matter. This is going in our Future of Text book. It’s going in the Sloan thing. It’s going in the Hypertext thing. So yeah, we’re combining the sensible with the completely “what in the world”, which is just wonderful.

Adam Wern: Yeah, but we shouldn’t typecast ourselves too hard, I think. So I will also do boring, ordinary things, useful things. And I’ve already seen Andrew do good interactions that only make sense in XR, and that’s breaking new ground, exploring new alternatives. So it’s important to not be typecast too hard. Yeah.

Frode Hegland: I just wanted to show you something briefly on that point. Obviously, I very much agree with you. So here is some thesis. Oh, what’s going on there? Of course, now that I’m on here with you guys, I’m having crashes. Let’s see if we can open this. There we go. So, you know, I’m an artist by training and by birth, basically. Don’t worry, it’s my computer being slow, not your view. So I’d like to do the way-out stuff, but this is what I’ve been working on most of the time recently: this context menu. Because I think it is so insanely important to have a context menu that is useful, because once you have a long context menu… you know, in Author, the context menu is awful. Look at that. You know, I’m trying to make it smaller. People will not read that. But in here, basic testing like we have here: we have the Ask AI stuff, and it’s really nice to have the edit button here so you can instantly change what the commands are. And here we have annotation now, many colors. And the key thing is you can just do R to get it red. That doesn’t work yet, but I just want to show you the last ones. Obviously we have copy as text, but also copy as a quote, meaning it has visual meta attached. Those should be options with keyboard shortcuts. Lyft does this. It puts it in a differently readable way. But then this is the last thing that I’ve been working on. We have find in document, find in library, but find online is actually the Enter key now. So when you’re talking about not putting ourselves in a box, I agree with you. I just think that with you coming on board in this particular capacity, Adam, working on this, we have more freedom, that’s all.

Frode Hegland: Right. Andrew, any comments, questions or poetry?

Andrew Thompson: No, I think I’m good.

Frode Hegland: Yeah. Good, good. All right. So.

Chat log:

16:03:33 From Fabien Benetou : ugh OK apologies, sth else started in the background… with introductions so I was VERY confused!

16:05:51 From Mark Anderson : Sorry – b/band fixed. Phew.

16:09:40 From Frode Hegland : ‘Shamul’

16:10:05 From Samya Roy : “Shammo”

16:11:23 From Frode Hegland : Reacted to “”Shammo”” with 🔥

16:13:10 From Frode Hegland : Today’s Agenda: https://public.3.basecamp.com/p/1A96v8G8MxBSUBF6PhaQTecc

16:19:45 From Mark Anderson : Back.  Having a proper First World Problem day!

16:21:08 From Samya Roy : I am curious for the student thing. Might apply. I already have us visa.

16:25:44 From Mark Anderson : @Adam’s iPhone  – hopefully you got an email invite to Overleaf for The HT viz demo. May not be there yet – I did it just before the call.

16:29:45 From Frode Hegland : Reacted to “I am curious for the…” with 👍

16:31:53 From Mark Anderson : Sorry for noises off – DIY next door. 🙄

16:32:45 From Frode Hegland : Reacted to “Sorry for noises off…” with 😃

16:35:22 From Frode Hegland : https://x.com/sainimatic/status/1790450802173309420

16:35:45 From Fabien Benetou : Reacted to “https://x.com/sain…” with 👌

16:36:31 From Mark Anderson : Possible title for my demo with Adam “Visualising The ACM Hypertext Corpus in XR” . Note the visualising aspect. IOW this is essentially dataviz so not stepping on the toes of our main demo.

16:36:45 From Frode Hegland : Reacted to “Possible title for m…” with 👍

16:36:50 From Dene Grigar : Replying to “Possible title for m…”

that sounds great

16:37:19 From Mark Anderson : Replying to “Possible title for m…”

‘A serving suggestion’ 🙂

16:37:28 From Frode Hegland : Reacted to “HT-24 Visualisations – Online LaTeX Editor Overleaf 2024-05-15 16-36-43.png” with 🔥

16:37:35 From Dene Grigar : Replying to “Possible title for m…”

The demo is about where we are in the project right now, focusing on the spatial hyper textual aspect of our work

16:40:22 From Mark Anderson : Reacted to “The demo is about wh…” with 👍

16:40:36 From Fabien Benetou : https://www.olegfrolov.design/spatialcomputing

16:43:15 From Mark Anderson : Oleg’s ‘bouncing DVDs’ demo looks like a new spin on choosing what to watch on screen.  You know the one you want, but can you catch it to select it?

16:45:48 From Andrew Thompson : You can just create a YouTube link with an embedded start time for this part of the video.

16:46:07 From Dene Grigar : Reacted to “You can just create …” with 👍

16:47:24 From Peter Wasilko : Reacted to “HT-24 Visualisations – Online LaTeX Editor Overleaf 2024-05-15 16-36-43.png” with 🔥

16:48:55 From Frode Hegland : Smart Mapping Nodes….

16:49:48 From Dene Grigar : I am imagining that the way we can frame this for the Sloan grant is that since we are building for academics, academics hail from different fields of study and so may require different environments to work in

16:51:04 From Mark Anderson : I assume – for the record the Proceedings we are using is 2022 (as that one is also in HTML)

16:51:19 From Dene Grigar : Reacted to “I assume – for the r…” with 👍

16:51:57 From Dene Grigar : Twine offers several different “environments” depending on what the user wants to create

16:52:38 From Samya Roy : Twine is brilliant yes

16:52:40 From Frode Hegland : 2024. Should be able to get the HTML versions. I’ll check with Wayne

16:53:06 From Mark Anderson : Agree that different domains may want different affordances. E.g. physical sciences are going to want tight coupling to contingent data sources. This is emergent in the data re-use field: “it’s in the paper” doesn’t cut it any more.

16:53:15 From Peter Wasilko  To  Frode Hegland(privately) : Yikes, I’m at 829 words.

16:53:17 From Dene Grigar : Replying to “Agree that different…”


16:54:00 From Frode Hegland  To  Peter Wasilko(privately) : Reacted to “Yikes, I’m at 829 wo…” with 😂

16:54:22 From Frode Hegland  To  Peter Wasilko(privately) : We will only be submitting 2 pages! Also for our book though

17:00:50 From Peter Wasilko  To  Frode Hegland(privately) : It might be a challenge to compress it, unless we use micro-fiche sized type!

17:03:47 From Mark Anderson : I think what we’re interacting with is an unintentional reflection of (the lack of) internal metadata, in terms of which text is part of what … object (and of course we don’t yet know what some of these objects are!).

17:04:18 From Frode Hegland : I think you and I should do a proper discussion on this separately, it’s too important.

17:05:30 From Mark Anderson : As we have access to HT’22 HTML we can try adding ad hoc (elements), e.g. to identify images/table/text that refer to the same thing.

17:12:22 From Peter Wasilko  To  Frode Hegland(privately) : 863 word now

17:12:27 From Fabien Benetou : didn’t get my turn but wanted to say I shared the CSLJSON with additional metadata starting with position few weeks ago

17:12:36 From Peter Wasilko  To  Frode Hegland(privately) : Going in the wrong direction.

17:12:56 From Dene Grigar : brb

17:13:03 From Mark Anderson : Reacted to “didn’t get my turn b…” with 👍

17:13:28 From Frode Hegland : Fabien, please speak that point, it’s important 🙂

17:16:42 From Samya Roy : Could you share a link?

17:17:05 From Dene Grigar : https://ht.acm.org/ht2024/workshops/intr-ht-summer-school/

17:19:35 From Mark Anderson : The HT summer school is the Sat/Sun preceding the conference workshops (and the Conf). So no loss of time. Dene and I (and Andrew?) will be there for the Summer school anyway. If we get to the point of a workable experiment, the main extra work is the ethics approval, user sign-off, etc.

17:20:19 From Leon van Kammen : have to go unfortunately. Thanks for the brainstorm so far!

17:21:09 From Dene Grigar : I am leaving at 9:30

17:21:29 From Mark Anderson : I wonder if Claus’ ‘Mother’ Spatial HT has been tried in VR/XR, even if only to see what it looks like.

17:21:55 From Fabien Benetou : leaving at :30

17:22:23 From Rob Swigart : Also leaving at 9:30… back Monday if I have electricity (there will be a shut-off, but it may not affect me).

17:23:35 From Frode Hegland : We will finish at half past today then.

17:25:21 From Fabien Benetou : that updated demo based on the menu layout adapted by the user https://twitter.com/utopiah/status/1790779174342230087

17:28:22 From Dene Grigar : Andrew, what time are you coming to campus today?

17:28:39 From Andrew Thompson : I’m aiming for around noon if I can make it by then

17:28:44 From Dene Grigar : okay

17:29:02 From Fabien Benetou : https://gdal.org

17:29:03 From Dene Grigar : I am leaving in two minutes.

17:29:10 From Dene Grigar : 1 minute

17:29:10 From Frode Hegland : See you Monday!

17:29:27 From Dene Grigar : I’ll send a draft of the Demo this week

17:29:33 From Dene Grigar : bye folks!

17:29:35 From Mark Anderson : It seems what’s at play here is that different views are like view specs – not every one uses all the available data.

17:29:44 From Frode Hegland : Reacted to “It seems at play is …” with 🔥

17:30:17 From Samya Roy : I enjoyed being here. See you folx soon 🙂

17:30:25 From Fabien Benetou : view spec to coordinate converting

17:31:20 From Frode Hegland : ‘Man is the measure of all things’

17:31:48 From Peter Wasilko : I would like a copy of the full JSON at your convenience.

17:34:45 From Peter Wasilko : If you build it, they will come.

17:49:53 From Frode Hegland : The book is by Alberto Cairo: The art of insight

17:50:35 From Mark Anderson : Replying to “The book is by Alber…”

Will join his others on my shelves

17:52:46 From Mark Anderson : Replying to “The book is by Alber…”

Arrives tomorrow.
