22 April 2024

Frode Hegland: Oh, did I just crash a party or something?

Speaker1: Yeah. Go away. We were just talking. No big deal.

Frode Hegland: Everyone.

Speaker1: You’re just chatting. No big deal. Yeah, bro, I’m on the Apple site, and as you know, you’ve got to do all these tests.

Frode Hegland: You’re absolutely right. And I just have to remember how I managed to get around it. So then we’ll see. And then the issue is good old Adam. And if we don’t get a positive reply from Sloan by Thursday evening, I’m going to just order one on Friday and then figure out how to pay for it later. So I’ve been texting with Rob to have a delivery there. But Rob, you’re right, it probably means Adam has to order it and use you as a delivery address. I think that’s what I did; I used you as a delivery address.

Speaker1: So yes, I should to you account.

Frode Hegland: Sorry, Danny.

Dene Grigar: And then I shipped it to you. And it’s about $250 to insure it and ship it.

Frode Hegland: Yeah. No, that’s not exactly a small thing. Exactly. So, yeah. Anyway, we’ll see about that as we get closer to the time, but it is a bit strange. It’s been, what, two months now that we’ve been waiting for Sloan on this?

Dene Grigar: Yeah, but, you know, they probably don’t get a lot of these kinds of requests and they’re not moving very quickly, but I’ll tickle the development people and the grant people one more time to see if we can get them to do something, because we have two people that need it now, right? Are we going to be doing that for two people or one person? One person and payments, or how’s it going to work?

Frode Hegland: One person this and one person that. Exactly. So it’ll be different.

Frode Hegland: So that’s interesting. Now, there are a few things. Okay? A few things. You’ve all been emailed what I’ve worked, not very hard, but medium hard on: the AI summary. Hello. Have you had a chance to look at it, briefly?

Dene Grigar: Can I mention something? What is this? Is this an internal document? Are we going to publish this?

Frode Hegland: It’s an internal document for now. Absolutely. So the thinking is that at a certain point we edit it and we release it. But I am extremely impressed with it, except that it gets a few things wrong. Like Fabien’s name: it misses an E with great consistency. Just a few things like that. So what I was hoping we could do going forward is I would produce one after every meeting and then email it to everybody. And if there’s something you believe it does not reflect correctly, you tell me and I’ll just edit it. We don’t have to keep it AI-pure, obviously. It just seems so much more readable than going through the huge transcripts. And now I am having every meeting transcribed and put on our website, without access for other people. So that’s exciting. Hello, Andrew. Okay, I’m just going to show you something that’s very relevant to you, Andrew, particularly. But I don’t think we are full here; I think at least Fabien will join us. But please have a look at the invitation and just look at the image on there. I’ve updated it a bit. So you’ll all see the little logo thing: the little gray thing, our gray sphere, kind of snuck in there. Hello, Mr. Fabien and Mr. Peter.

Andrew Thompson: I have a very shallow, like, faint cyan light in the environment. I could try swapping that to orange to match the logo here. It’s really subtle, though. It doesn’t do much.

Frode Hegland: What do you think, Dene?

Dene Grigar: We teach consistency of design in the program. So whatever you tell us the color should be, we’ll embed it into the environment. So that’s going to be our color palette. So normally we start with the mood board. Then we develop a color palette. Then we develop the typography. Then we execute the design. So we haven’t done any design work in this environment yet. So we’re just waiting. But this is going to be the color palette and the typography. Then we will move in that direction.

Frode Hegland: So the thing is, I just put a link into our Future Text publishing for volume five, and all I did was just start with what was on the Future Text website with the orange and the gray. Yeah. And then the addition of that little sphere. It would be easy to over-design this, but we certainly don’t want it to look either shoddy or irrelevant. So our pitch about the boundary of knowledge, and the boundary of knowing more about what we don’t know, fits with the sphere. So that’s why I thought the sphere, and Andrew’s sphere on the website, kind of connect.

Dene Grigar: Seems to me, if that’s going to be the philosophy, I mean, I love the philosophical framework you’re putting forward, but if that’s the case, then maybe instead of the definite boundary around that final orange circle, it should be kind of pixelated out, so it expands. I mean, if we’re arguing that there’s not a confined notion of knowledge, that we’re trying to push the boundary, that whatever boundaries are there are arbitrary, right? And part of our job is to go beyond what is current knowledge. Then there is no finite boundary, so that final little edge could be pixelating out.

Frode Hegland: Yeah. Okay. I’ll, I’ll look at that as well.

Andrew Thompson: One thing that I find interesting here, I’m not sure if this was intentional, Frode, but the sphere exactly has those two layers just because of the necessity of the mechanics. So when you tap and hold, the outer one expands around you and the other one stays there, so it very much fits with what we have symbolically. I don’t know what that means, because that was just a necessary mechanic so you had an obvious button. But yeah, it does fit.

Frode Hegland: Sorry, the line is a bit weird. I interrupted both you and Dene. Sorry. Please continue.

Andrew Thompson: It actually didn’t sound like you interrupted on my end. I was finished, okay.

Frode Hegland: Yeah, it is on purpose. I’ve gone through a few today, and the whole double circle made the most sense.

Andrew Thompson: If I have some extra time after implementing the sort of primary reading distance thing that’s the focus this week, I could try to put some of those orange lines into the environment just to see what they look like. Could be kind of interesting to see, just inside the library when it opens up. Not anywhere else.

Speaker1: Yeah. I wouldn’t spend.

Andrew Thompson: Too long on it so we could.

Dene Grigar: Sorry. I’m sorry. I thought you were through.

Andrew Thompson: Sorry.

Dene Grigar: Teeny little lag. Peter’s comment. The XR app should match the color scheme of the book cover art. So I think everything should coalesce into one schema.

Frode Hegland: Yeah, absolutely.

Andrew Thompson: I think we’ve always known that we wanted a splash of color somewhere, but we didn’t know what that color was. We can just go with orange. Before, I had been doing the slightly off blue just because that was the website color, but I do quite like the orange.

Frode Hegland: You guys know where the fruit orange comes from? You know, obviously the word orange comes from the fruit, which is cool. The fruit actually comes from China. I find that so weird. You know, at Chinese restaurants at the end you often get pieces of orange. I just thought that was silly. But that’s so weird. Okay, the world is weird. Right. So there are a few things. I know we’re all over the place today, and Fabien is going to do a demo today. I think that’ll be a really good chunk of time for us to spend on. But I wanted to go through a little bit of our case study; I’m just looking things up here. I just uploaded a huge document that I don’t expect anyone to have had a chance to read, but I’m going to share one single slide, not even a slideshow. And that is this. So the top bit is currently how Reader works: when you tap on the screen, you get options on the top. So this is really important to take the pulse of where we are. I feel that an external piece of software should be able to add a sphere, literally our sphere. The example here has it at the bottom. If you tap that, the following should happen, and it should happen in the use case.

Frode Hegland: The first use case, which is preparing to write an academic document, doing a bit of literature review. So tapping it opens a web browser, goes to the web page for this, where we then have the sphere that Andrew has built. So the sphere is the continuity, right, rather than just a button. And here’s the key thing that we need to discuss and see if we agree on: what should be transmitted. First of all, very important context for the initial use case: the documents are from ACM. Our users are ACM Hypertext; that’s our community. ACM currently has all the documents available as PDF and HTML. So what I think we should do, because we don’t want to be too crazy, is: you click on that dot, and what is uploaded or presented in the full XR experience is this document, built from the HTML but rendered in a familiar way. So at least initially it looks like plain text on a white background, so it doesn’t look too crazy. But that should then allow us to relatively easily have multiple views, such as one screen per section and all of that good stuff. What’s the feeling on that part? That’s the first half.

Dene Grigar: Well, that sounds like what we’ve been talking about.

Speaker1: Yeah.

Frode Hegland: Yeah, exactly. That’s my feeling. So then the second half is the entire library of the user. At least the metadata for the library should also go up. So when going through a document and getting to the reference section and all of that stuff, they should instantly be able to follow links if they are in the user’s library. So if we all agree on this basic mechanism, then we have two questions. One is how to upload the data, and I’m so glad we have both Fabien and Leon here. And the second one is, I really think we can start doing design work for how reading should be, because Dene has a wish, being our prime academic user, of having 360, which is fine, but there are so many real-world considerations. One of them is that when you sit at a desk and you’re suddenly in a 360 environment, you may reach out and hit your computer that’s on the desk. These are realities that we need to consider so that we can deliver on this, because we all agree that once you’re in XR it should be a 360 swivel environment, right? Not necessarily for the entire time. As Fabien says, if you want to do forward facing and put things to the side, you should also be able to work in that mode. But you shouldn’t be restricted. So, is 360 feasible, Dene?

Dene Grigar: Let’s imagine it this way. I mean, I’m not going to be reading like this. That’s not how we read, right? But we should be able to put things all around us. So the 360 environment is like a virtual desk, so that everything can be placed where we want. I don’t have enough space on two desks right now; if I had three more desks, I still wouldn’t have enough space. So we think about that space as more of a place where we can put things and move things. And I can read over here for a while, and maybe I can read over here for a while. Maybe you put something over here, move it over here. And know that, Frode, one of the reasons why I didn’t want to Zoom off this computer was because this desk is a standing desk. It goes up and down, and by the seventh damn Zoom meeting before noon, I want to stand, right? If I’m zooming all day, I have to sit right here and look at it. This way I can go up and down and up and down, which is great. So now I’m zooming in two places so I can at least stand some of the day. In our meetings I’m sitting, but after that I’m standing most of the day.

Speaker1: Yeah, that’s.

Frode Hegland: Really, really important. And flying to Asia and back, I spent a good couple of hours in the headset in that cramped position, and my God, it really changed the whole dynamic of being there, sitting with someone’s seat inches from my face. But, you know, I’m in the forest; it all works. However, the issue of where things are is so important, and that’s why I’m looking forward to Fabien presenting in a few minutes, because, you know, like in Softspace, there’s this really cool thing: you make a fist motion and then you move the entire environment. So one of the things we could have here is you do a certain gesture and you move the entire environment around. So now that’s your front, right? So you’re reading this bit and you’re reading this bit. And the notion of having a focus area is obviously really important. So on our Future Text Lab page, where we have the research questions slash use cases, I’ve taken what Dene put in Slack and what some of you have also put into user stories, which is something that we should keep refining. Yeah. Fabien.

Fabien Benetou: So I haven’t tried it. But as I hear you talk, I’m thinking that one way that might work is, yes, you have... I mean, just for context, on my desk I have just one screen. I know that as a computer professional that makes me look dumb, because theoretically the more screens you have, the smarter you look, at least in movies. But I stick to one screen because I think, ergonomically speaking, it is better. Yet I have a bunch of physical papers and notes on the side, and even physical post-its around my desk. So there are different physical areas with different properties and different capabilities. And most of the time, indeed, the full focus is on my one single screen, to either read or code or both, and the rest is annexed. And usually I turn on my chair for a minute and then I turn back. So how I would imagine that in XR is, for example, I pinch the environment, I flip or turn it as a cylinder around me, and when I release, it snaps back to where it was. So basically you have the main position where you’re at, and then, just like an elastic, it goes back to the main default position.

Frode Hegland: Yeah, I think that makes a lot of sense. And on that, I think that’s exactly what you’re talking about as well. Right. Well, she’s off; that’s fine. I just wanted to show you this. This is a mockup screenshot of how I’m going to change Reader, because I had all these different views and even me, the designer, I kept forgetting them. So the reason I’m showing you this is it’s an interaction in one spot, but it is about changing what that one spot presents, which is really useful, I think. So the new way it’s going to be in Reader, whether it’s the iOS version... Yeah. So, Dene, Fabien was just talking, actually. Do you mind repeating the last bit, Fabien?

Fabien Benetou: Yeah. I was imagining that you would be able to change the working area by pinching and holding, for example. But then on release, it snaps back like an elastic. It goes back to the default position which is facing you when you started.

Frode Hegland: I think that’s genius and useful and important. Vision does not do that, and that’s why it’s really neat. When I demo to people, you can put things everywhere, but they’re not really accessible. So that’s important. So, Dene, the little screenshot here of a mockup is of relevance because currently I have all kinds of keyboard shortcuts for views in Reader, but even me as the designer, I forget what they are. So what this is: when you fold a document. And this is based on most of the views in Reader being based on having the headings available, because they help you orient where you are. Right. So now at the bottom of the screen you can do an outline, or an outline showing the highlights, or names, glossary, blah blah blah. Not a huge point, but it’s just what I found recently with the headset: yes, it’s nice to move around, but sometimes I just want to change what’s in front of me, really interactively. Right. So.

Dene Grigar: May I ask how hard all that will be to implement, Andrew?

Andrew Thompson: I mean, it sounds like it’s something that’s fairly far down the line, so I’m not sure. Just having it, like, return to a starting position... I may be missing something, but I don’t really see the value in that, since you could already just move stuff wherever you want it.

Frode Hegland: Yeah. So imagine this: you’re reading something and you’re really engrossed in it. But because we’re doing HTML-rendered beautiful documents, it’s going to be not the whole thing. But you focus on that, and then you want to go to something else. And let’s say, for the sake of this, you’ve chosen a layout where every chapter heading is a virtual page. So, you know, it’s kind of huge. At this point you may want to go over and look at this bit, stay there. But then instead of having to kind of finesse your way back, you have kind of a home button, so to speak.

Andrew Thompson: Right. Having a button makes sense, but having to, like, pinch and keep it pinched the whole time, and as soon as you let go of the pinch it returns back: mechanically it sounds really cool, but when reading a document that seems like it would get annoying, especially since the headset occasionally loses tracking and it makes you release even when you didn’t mean to. So you’re going to be reading in place, and it’s going to just fly back to the start. Having a dedicated button, or maybe adding it to the menu or something, would make sense. I don’t know. Just moving stuff around isn’t that hard; I could just save a location and have it return to that point. Yeah. So in theory it shouldn’t be that difficult. I just don’t know the practicality of the current idea.
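A minimal sketch of the home-position idea Andrew describes: save the workspace pose once and snap back to it on demand. It assumes a Three.js scene and a group called workspace, which are illustrative choices, not the project’s actual code:

```typescript
// Sketch only: assumes Three.js and a "workspace" group holding the panels.
import * as THREE from 'three';

const workspace = new THREE.Group();

// Remember where the workspace started so we can always return to it.
const homePosition = workspace.position.clone();
const homeQuaternion = workspace.quaternion.clone();

// Called while the user holds the "move environment" gesture:
// rotate the whole workspace around the vertical axis.
function rotateWorkspace(deltaYawRadians: number): void {
  workspace.rotateY(deltaYawRadians);
}

// Called from a "home" button (or on pinch release, in Fabien's variant):
// snap the workspace back to the saved pose.
function returnHome(): void {
  workspace.position.copy(homePosition);
  workspace.quaternion.copy(homeQuaternion);
}
```

Whether the snap-back happens on release or only from a dedicated button is exactly the design question discussed here; the saved pose works the same either way.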

Speaker1: Okay. It.

Frode Hegland: So then... okay, I want to say hi real quick. No, it’s just the first day of school. So then the question I really want to settle, because we’ve gone through this in different ways so many times: for the button in my software, representing an external developer, to be able to send a URL with stuff through to the webXR side that includes the whole document. I believe Leon and Fabien have been looking into this lately. Have you had a chance to see what they have been thinking about? Or do you guys want to talk about it?

Speaker1: You mean the document.

Dene Grigar: About reading and and references and all of that?

Frode Hegland: I’m talking about being able to upload from one place to webXR.

Dene Grigar: I’m trying to keep up with all the documents. So we have the one that... this one you just showed. We have the one about the reading and references. And what’s the third one?

Frode Hegland: No, what I mean is.

Speaker1: So I just took a peanut.

Frode Hegland: What I mean.

Speaker1: Is.

Frode Hegland: You, the user, you’re reading a PDF, and you click a button, and now that document is in webXR. So we’re talking about the mechanism. We’ve gone through this many times, so I think we should just nail down something: the mechanism through which that button sends it to the environment. I saw that Leon and Fabien had posted some things on Slack. Do you guys want to maybe mention what that is?

Leon Van Kammen: Yeah, we can. Can you hear me on the first go? That’s a great start. So, well, there are various experiments, so I will just choose the one which is, you know, the freshest in my mind. Regarding bringing documents into webXR, there are various levels: for example, bringing a document into the library; bringing a spatial version of a document from the library into your immersive environment; or bringing a document from somebody else’s library; or viewing a document based on a link. So there are various entry points for bringing a document into the XR space. What I was talking about with Fabien lately was based on an experiment where we basically had two web browser tabs open, and where we could send data from one tab to another tab. So in theory, you can have two webXR experiences. If you have two browser tabs, you could have one tab connected to Frode’s library and maybe another tab connected to Andrew’s library. And then through this code snippet, you could basically send a document from one tab to another tab, basically copying it to somebody else’s library. That’s what it was about. And surprisingly, it was not very difficult; it was actually very easy, since every browser tab has a little bit of storage and that can be shared across tabs. That’s how the other tab could sense when something was put into this shared storage and could pull it out. So that was basically the experiment we did, and we wrote that into the chat. Maybe it’s useful, maybe not, but yeah.
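A minimal sketch of the two-tab experiment Leon describes, using plain browser localStorage plus the storage event that fires in the other tab on the same origin. The key name and document shape are made up for illustration:

```typescript
// Sketch only: two tabs on the same origin pass a document through localStorage.
const CHANNEL_KEY = 'ft-shared-doc'; // assumed key name

// Sender tab: put the document JSON into the shared storage slot.
function sendDocument(doc: object): void {
  localStorage.setItem(CHANNEL_KEY, JSON.stringify(doc));
}

// Receiver tab: the "storage" event fires here when the other tab writes.
window.addEventListener('storage', (event: StorageEvent) => {
  if (event.key === CHANNEL_KEY && event.newValue) {
    const doc = JSON.parse(event.newValue);
    console.log('Received document from the other tab:', doc);
    // ...add it to this tab's library here.
  }
});
```

As Andrew notes next, this only works between tabs of the same browser on the same origin, which is exactly its limitation for the Reader-to-headset case.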

Andrew Thompson: If I could respond a little bit. First of all, I think it’s a very clever solution; that’s pretty cool, and I’m sure we can find some use for it. Now, my understanding of local storage, I’ve worked with it a little bit, but not nearly as much as you guys have here: I believe that only works for browser tabs that are running in the same browser, so you can’t be, like, across different browser types, things like that. And we want to go specifically from Reader as our use case, which is a standalone application, and I don’t believe that even talks to local storage at all. Do you know if there’s a way to still use this, to kind of almost hijack the browser behavior and save stuff to local storage using Reader, and then access that from a tab? Or, in that case, should we look for a different route?

Speaker1: Yeah.

Leon Van Kammen: It’s correct what you’re saying. I think if you want to bring in something really from the outside, it’s either going to be some kind of networking solution which is more complex, or it’s going to be the copy paste escape hatch, basically. Or maybe even a combination of that with some simple networking functionality. Okay.

Andrew Thompson: Sorry, do you mind if I respond first? Yeah, that makes sense. That’s more in line with what I was thinking. I know in the past, in kind of earlier stages of these meetings, Frode mentioned that it wasn’t difficult to set up, like, a little upload using Reader to just put it onto a dedicated server and store the information there. Is that still something that we could potentially do? To have Reader upload the document, with the visual meta saying where it should go in XR, onto a dedicated server, on your server somewhere, and then just link to that with the same button?

Frode Hegland: So from my side, I’d like the experts to answer that properly, but I’m willing to invest some more in Reader to support this. However, even though at some point we want any developer to be able to integrate, I don’t want to slow things down. So if we decide, and I’m looking at Dene because this is really your call, because you have a much better understanding of this side of it: if it is significantly easier for the end user to drag a document to a specific web browser tab and then have it available in XR, we should probably do that. It’s taken many, many months of many different discussions to get to a more elegant solution. But at a certain point, our main thing, our promise to Sloan, is that it’s the user’s own data. We can maybe make it more elegant over time. Peter and then Brandel, if you have perspectives on this, please.

Speaker7: Okay. Another possible means of integration would be to rely on a file watcher, so that each process could be writing to a known location on disk, and each process could also be monitoring the sync file at that location, so that a file change made by someone else on that file would be detected. Then you pull in the most recent version. So basically you sort of have an automatic pull of the latest change of the file on disk, and if your state becomes newer than the file on disk, you write out to that file. Then anyone else watching that file would see its timestamp jump and would be able to pull the data in that way. So it’s a way of avoiding network communication, relying on the disk file storage mechanism as the substrate instead.
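A minimal sketch of that sync-file idea, assuming two native processes on the same machine and Node’s fs module; the path and state shape are placeholders. (As Brandel points out next, a browser on the headset cannot watch another machine’s disk, so this only covers process-to-process on one computer.)

```typescript
// Sketch only: each process watches one agreed file and writes to it when newer.
import { watchFile, readFileSync, writeFileSync, statSync } from 'node:fs';

const SYNC_PATH = '/tmp/futuretext-sync.json'; // assumed location
let lastSeenMtimeMs = 0;

// Pull in the file whenever another process updates it (timestamp jumps).
watchFile(SYNC_PATH, { interval: 500 }, (curr) => {
  if (curr.mtimeMs > lastSeenMtimeMs) {
    lastSeenMtimeMs = curr.mtimeMs;
    const state = JSON.parse(readFileSync(SYNC_PATH, 'utf8'));
    console.log('Sync file changed on disk:', state);
  }
});

// Push our own newer state out to the shared file.
function publishState(state: object): void {
  writeFileSync(SYNC_PATH, JSON.stringify(state));
  lastSeenMtimeMs = statSync(SYNC_PATH).mtimeMs;
}
```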

Frode Hegland: Yeah, that makes sense to me, but I’m going to give it over to Brandel.

Brandel Zachernuk: If the question is whether a web-based experience on a headset has the ability to see any changes on another non-server computer, be it the user’s laptop or anything else, the answer is no. The only way to do that is for them to talk to each other based on something like the peer-to-peer connections that can be available via things like WebSocket connections or via WebRTC. The solution that I had up and running as a proof of concept for Frode’s Reader documents is stateless, in that nothing is actually uploaded to a server. The upside of that is that you don’t need any specific infrastructure, and all you need on the server side is the matchmaking capability to pass those things back and forth between them. The downside is that it is stateless and there is no retention of those documents per user. If you want to upload documents to a server, then you need to have that scale with the quantity of users and the user data. And that at some level does make things simpler, because you just have a create/read/update/delete, CRUD, app set up where there’s a user account and you can upload all of the data that you want into there. And those are not hard to make; the internet, and I don’t say that dismissively, is replete with examples of how to do that. But it changes somewhat the nature and cadence of the updates and structure that you need to gather in order to be able to achieve that. So it’s definitely an option. It’s also possible to do both at once, so that you have the ability to make a real-time upload at the same time. But it really depends on how much infrastructure you’re willing to build, and how fast you want those updates to be able to happen.
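For the upload-to-a-server branch Brandel sketches, a CRUD setup can be as small as the following. This is a hedged illustration, not the project’s backend: Express, the routes, and the in-memory Map are all assumptions, and there is no authentication here.

```typescript
// Sketch only: Reader PUTs a library JSON, the headset GETs it back by user id.
import express from 'express';

const app = express();
app.use(express.json({ limit: '10mb' }));

// user id -> most recently uploaded library JSON (in memory, for illustration)
const libraries = new Map<string, unknown>();

// An external app such as Reader uploads the user's library here.
app.put('/library/:user', (req, res) => {
  libraries.set(req.params.user, req.body);
  res.sendStatus(204);
});

// The webXR page fetches the same library when it loads.
app.get('/library/:user', (req, res) => {
  const lib = libraries.get(req.params.user);
  if (!lib) {
    res.sendStatus(404);
    return;
  }
  res.json(lib);
});

app.listen(3000);
```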

Dene Grigar: Let me respond to that, and everything else that’s been said about this. What I don’t want to see us do is pick a direction we’re going to go in, saying we just want to get moving with this, right, and it doesn’t matter if it’s elegant or not, and then get to a point where we say, oh my God, you know, this is never going to work for a whole lot of people, right? This is not usable in the long run, and we have to circle back and start over again and rip out everything we’ve done. So if the thing we’re building can be built upon so that it becomes elegant over time, then that’s the answer, right? So the easiest route: we can call it staying within scope, yet at the same time we have stretch goals, right? We stretch to the next goal. That makes sense to me.

Frode Hegland: Yeah, exactly.

Andrew Thompson: I think, as a minimum, we agree we want to be able to import your own documents. That is currently done through the library, which, from our initial discussion, it seems like accomplishes that: the user just builds a library of their documents, whether by hand or with some software, and imports that. But we can, of course, change that to just load a single document. The code is still in there; that’s how we originally did it. So that’s not hard if we want to just have a button that says, like, view a document, and you can just access your library there, like your internal library, not one of our libraries. Conflicting terms. That’s fine; that totally makes sense. And then if we want to streamline it later, where it’s no longer accessing a document on your machine, it’s now accessing a link to a document: how that link was created can be determined individually, whether that’s uploading to a server and then retrieving it, or something that was already there. It could just be, like, some hash ID or something. That seems expandable to me, and it also doesn’t break anything if we don’t go to the next step.

Speaker1: So, you know.

Frode Hegland: Based on Chinese wisdom there. Sorry if that sounds sarcastic; it isn’t. You were right in this: we shouldn’t just do something quick and dirty. So let me see if we all agree on the ideal solution. The way I see the ideal solution is: someone has a library in some kind of a folder somewhere. They do a simple setup thing once. When they are reading in external software and they want to go to XR, ideally they just click a button and open the headset. Obviously they have to be at a certain URL, and there it is. But it needs to be that simple. Now that we’ve really highlighted that it needs to be simple enough that someone who is not super interested in this will actually bother: making an account means we have to have a server, and that gets messy, right? And we did discuss for a while being able to just have a kind of URL which is, let’s say, your email address plus a random string or a password or whatever, just so it’s not guessable by email.

Frode Hegland: That to me seems great. But also we have to. Yeah, sorry, I could waffle on. Please comment on that, guys.

Brandel Zachernuk: So there’s a hard requirement for some kind of coordination mechanism between two devices. If there’s no way to distinguish my traffic from your traffic, then hilarious and strange things happen. So there needs to be at least some kind of disambiguation that allows people to elect to say: this stream of data from a computer is equivalent to, and can be reconciled with, the actions on this other computer. It might be possible to achieve that with just a QR code, or just a 10-digit string or phrase. And that could be, you know, an English-language word, such that it’s legible. But yeah, that’s the minimum level of coordination that’s required, and it can be done with something that doesn’t have any persistent storage, user management, accounts, stuff like that. Yeah.
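A small sketch of the kind of pairing code Brandel mentions, and of Frode’s earlier email-plus-random-string URL variant. The word list, URL shape, and parameter name are placeholders, not agreed project values:

```typescript
// Sketch only: human-readable pairing code, e.g. "amber-falcon-4821".
const WORDS = ['amber', 'falcon', 'cedar', 'harbor', 'lumen', 'quartz', 'tidal', 'verge'];

function pairingCode(): string {
  const pick = () => WORDS[Math.floor(Math.random() * WORDS.length)];
  const digits = crypto.getRandomValues(new Uint16Array(1))[0] % 10000;
  return `${pick()}-${pick()}-${digits.toString().padStart(4, '0')}`;
}

// Frode's variant: a shareable but unguessable library URL built from the
// user's id plus a random suffix (hypothetical host and parameter).
function libraryUrl(userId: string): string {
  const suffix = crypto.randomUUID();
  return `https://example.org/xr/?library=${encodeURIComponent(userId)}-${suffix}`;
}
```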

Frode Hegland: So, Andrew, you’re the captain of this ship. What do we do now?

Andrew Thompson: I’m less the captain; I’m more like one of those guys down at the bottom with the oars. But yeah, I can focus on whatever you guys decide. I share my opinion on different things, but I can really go any route. I did mention in the chat here, for those who didn’t see: we’ve talked about the account idea before. I think that would be great as a someday thing; it’s probably not an immediate need, because the only real use I see for an account would be sharing, or saving your information across sessions on different headsets. And most people don’t have different headsets that they’re jumping between; they have, like, a headset they use and then their computer or something. So as long as you’re saving locally on that headset, they won’t really have to worry about having an account. Obviously in an ideal world you’ll have that someday, but I’m just saying it’s probably not necessary for our demo, since it would take a lot of extra back-end setup.

Speaker1: Right. But.

Frode Hegland: Yeah. I mean, I don’t think we should have more discussions on this in the sense of all the different options; we just need to nail something down, this many months into this. The requirement is: there’s a library, external. An action from the user means the library is now in the headset. That’s it. What’s the simplest, most robust, least onerous way for Andrew to implement that?

Andrew Thompson: And don’t we currently have that? That’s what I’m not understanding, because we currently load a library in from, like, an external file. That seems to be what you’re talking about at a base level.

Frode Hegland: Where is that external file currently?

Andrew Thompson: Wherever you want it to be. It’s just accessed somewhere, wherever you saved it on your headset. So it’s just in storage.

Speaker1: Yeah.

Frode Hegland: But starting externally, with an external application. How does an external application... let’s say we are doing a demo on Wednesday to someone new from ACM, it’s all kinds of exciting, and they have a specific library they want us to use, and they happen to use a Macintosh. What would be the mechanism for that person to specify to a webXR experience to use that particular library?

Andrew Thompson: So right now it would be you hit like download or wherever they have it saved and you download their library and then you open up the webXR and you load the library.

Speaker1: When you.

Frode Hegland: Say download, you mean download in the headset. Yeah. Okay, so that means that, okay, I’m being really dumb, not pretending to be dumb, I am dumb: they’re in the headset, they press the sphere, the bar comes up. The bar says you don’t have any libraries associated with this environment, do you want to click a button to find it? Is that the kind of thing you mean?

Andrew Thompson: No. You wouldn’t even get inside the XR environment until you choose a library. So there would be some UI just on the browser tab where you’d have to choose a library before you can even load in; the silver sphere wouldn’t be there at the start. That’s probably how we would do it right now. Otherwise it’ll just load into nothing, but that’s silly.

Frode Hegland: So the silver sphere wouldn’t be there. You go to the URL and it says, hey mate, you haven’t got a library, click here to find a library. What kind of thing is entered? Would it be a URL, or no?

Andrew Thompson: Well, right now this is what it does: it opens up just an explorer window that shows the files on your headset. You choose a library and then load it, and then the silver sphere shows up. Right now the silver sphere doesn’t toggle, but I did add a notification that tells you when it’s fully loaded, so we don’t get that issue we had before. And then you just go; then you load it.
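A minimal sketch of that current flow: the page asks for a library file before entering XR, and only reveals the sphere once the JSON has loaded. The element id, the Library type, and the stub function are assumptions for illustration:

```typescript
// Sketch only: choose a JSON library file on the browser tab, then enter XR.
interface Library {
  documents: Array<{ title: string; url?: string }>;
}

const input = document.querySelector<HTMLInputElement>('#library-file')!;

input.addEventListener('change', async () => {
  const file = input.files?.[0];
  if (!file) return;

  const library: Library = JSON.parse(await file.text());
  console.log(`Loaded ${library.documents.length} documents`);
  showSphereAndEnableXR(library);
});

// Placeholder: in the real app this would hand the library to the XR scene
// and make the silver sphere appear.
function showSphereAndEnableXR(library: Library): void {
  console.log('Library ready, sphere can appear.');
}
```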

Frode Hegland: Does that mean then if we’re on the Apple Vision Pro? First of all, if I have a library on iCloud that’s automatically synced, I can refer to that library.

Andrew Thompson: In theory, yeah. It just needs to access the JSON file. So that could be on a server. It doesn’t have to be local.

Frode Hegland: But you know, I’m a Mac guy, so I’m going to have my Reader library. So in the headset, when I get to the Safari browser thing, I can navigate to my JSON library on my iCloud and that will work, right? So what would be the...

Andrew Thompson: It shouldn’t matter where it’s saved as long as it is a JSON library.

Frode Hegland: Exactly. But on Quest, what would be the equivalent way that someone would get their stuff onto the Quest headset?

Andrew Thompson: Same way. Yeah, same way. Or just download it from anything like this; there are so many ways people transfer files. Essentially, right now we’re offloading it to the user: we trust that you know how to move files around. Just make sure you have it somewhere.

Frode Hegland: It should feel automatic for the user, of course. Right. So with the Quest, being a Mac guy, that’s a different thing; uploading images or 360 content to it I find just not very useful.

Frode Hegland: But okay, so Fabien, you have a Quest 3, right?

Speaker1: But how would you.

Frode Hegland: Get a library or a JSON file from your computer onto that?

Fabien Benetou: Basic drag and drop to a WebDAV server.

Speaker1: Yeah, the web.

Frode Hegland: App is useful. Okay. So shall we say, and this question is primarily for you: shall we say that for the beginning we expect the Quest user to understand how to use WebDAV and stuff to get things to the headset?

Fabien Benetou: No, just drag and drop. The user doesn’t care how it’s done in the back end. They drag and drop onto the web page, and I guess they get in return a URL with a parameter that points to the WebDAV resource that the client would use. They open this on the device. You can even have, like, a username, like potato2000, and then it opens the document. The final user shouldn’t have to care what the back end does.
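A minimal sketch of that drag-and-drop flow: dropping a file on a web page PUTs it to a WebDAV share (WebDAV uploads are plain HTTP PUTs) and hands back a headset URL pointing at it. The WebDAV base URL, headset URL, and parameter name are assumptions:

```typescript
// Sketch only: drop a file on the page, upload it via WebDAV, print a headset link.
const WEBDAV_BASE = 'https://dav.example.org/futuretext/'; // assumed share

document.body.addEventListener('dragover', (e) => e.preventDefault());

document.body.addEventListener('drop', async (e: DragEvent) => {
  e.preventDefault();
  const file = e.dataTransfer?.files[0];
  if (!file) return;

  // WebDAV upload is just an HTTP PUT of the file body.
  const target = WEBDAV_BASE + encodeURIComponent(file.name);
  await fetch(target, { method: 'PUT', body: file });

  // Give the user the URL to open on the headset.
  const headsetUrl = `https://xr.example.org/?library=${encodeURIComponent(target)}`;
  console.log('Open this on the headset:', headsetUrl);
});
```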

Dene Grigar: Would it be useful if Andrew made a little quick-and-dirty video demo of this process that we’re talking about? Because it seems pretty simple to me, but maybe we should develop one so that it would be on our website and people can see it; it’s more concrete.

Andrew Thompson: I’m also not saying that this is the end goal per se; I’m saying this is what currently works. Well, we’re not that far off. This is how it’s working, minus the silver sphere turning on and off. You can move the library from wherever you want, as long as you’re physically transferring the file. I understand that you want, in the future, to have it done automatically with, like, Reader and whatnot. And that’s fabulous, and that would be a great level of polish. But it’s also a level of polish that you don’t need for the conference, and it’s also something that’s done externally, not really by the XR software. So I don’t know if it’s as important as we’re making it out to be. Does that make sense?

Speaker1: Yeah.

Dene Grigar: What Frode is suggesting is a really great stretch goal, and that’s doable. We have another year, and if you look at where we’re supposed to be right now in the timeline, we’re fine. You know, so rather than stress about this, we can say this can be adapted and extended to do the next thing. But it’d be great to have a little demo so that we could say: here’s how you do it. You know, you’re wanting to access your ACM paper; here’s step one, here’s step two, here’s step three. Future plans include being able to do this.

Frode Hegland: So yeah, absolutely, we’re on the same page on this. We do mention the Vision Pro a few times in the Sloan thing, so I don’t mind that being the premier initial vehicle. So let me just go through what I think we’ve agreed: I have my library as a JSON file on iCloud. I put on my Vision Pro, go to this URL, and it says: where is your library, buddy? I click a button, navigate the file structure, find the JSON, and I now have my library in the headset. So an additional question: every time I go into XR, can it ping this and see if there’s a new version? That’s the first thing.

Speaker1: It cannot.

Frode Hegland: So it cannot every time.

Andrew Thompson: What we might be able to do is remember the last used document and auto-load that one. So you would... but no, you can’t do that.

Brandel Zachernuk: No, you can. You may be able to retain the contents from the last session, depending on the time horizon for it. Local storage allows you to persist data; that’s primarily what it’s for: same-domain, same-device things. But many browsers, Chrome but also including Safari, reserve the right to flush that data. So it’s not a hard persistence; it’s something where you get a certain allotment for a certain period of time. But if you are within the same week, then you would get everything that you stored there.

Frode Hegland: So then my question is really practical: could we have a button that says reload the last known document? No? So you would have to navigate to it every time.

Brandel Zachernuk: Yeah, that’s not the model of the web. The model of the web is that you’re browsing documents, and you have the ability to read things and maintain a state, but you can’t reach back into the file system. That’s not a safe action.

Frode Hegland: Well, hang on a second. When you browse the web, if there’s an HTML file, you get the latest version of it if you reload the page.

Speaker1: Correct.

Brandel Zachernuk: That’s the internet. That’s not your local file system.

Speaker1: Oh, okay.

Frode Hegland: So, okay, what is the minimum action, within what I just described, Brandel, that we can provide for the user to do this, so it’s not too faffy?

Speaker1: The.

Brandel Zachernuk: It would be to have a user account and have all of the server infrastructure set up to be able to manage the updates. I mean, from a user perspective, that is the simplest thing to do. It’s just that it’s the most onerous on our part.

Frode Hegland: And does it really mean that when they go to the website there’s a button that says: where’s your library? They can either click a button saying use what you already have, which is cached in webXR, or they can say update it; and if they update it, they will have to then go to the home folder and click through to find it, and then click to the JSON and open that again.

Speaker1: Yeah.

Andrew Thompson: I know I’m cutting in, but this kind of connects really well, so I’ll be fast. We’re talking about local storage, and you can’t save the file itself there, but you can save the file data there. And since it’s JSON, why can’t we just save the last JSON that was loaded as a library and auto-load that next time? I don’t understand why that wouldn’t work.

Brandel Zachernuk: You also have the ability to save out from the headset back to files, if you like, so that you can maintain state there. Because the other thing about it is that you’re only reading a copy of that file; you don’t have read-write ability with the contents of that file on your iCloud Drive or whatever filing system. So in order to be able to persist things back, you would need to be able to flush that to a more persistent storage than merely local storage. So it’s a worthwhile thing to be able to do. But in a sense, you know, if you properly invest in message-passing structures, then these are, if not trivial, then achievable. Sorry, I’ll stop cutting in. Fabien knows this stuff just as well as I do.
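A minimal sketch of Andrew’s auto-reload idea with Brandel’s caveat built in: the last library JSON is kept in localStorage as a convenience cache, not real persistence, since the browser may flush it. The key name is an assumption:

```typescript
// Sketch only: cache the last loaded library JSON so the next visit can offer it.
const LAST_LIBRARY_KEY = 'ft-last-library'; // assumed key name

function rememberLibrary(libraryJson: string): void {
  try {
    localStorage.setItem(LAST_LIBRARY_KEY, libraryJson);
  } catch {
    // Quota exceeded or storage unavailable: just skip caching.
  }
}

// Returns the cached library, or null if the browser has flushed it.
function recallLibrary(): unknown | null {
  const cached = localStorage.getItem(LAST_LIBRARY_KEY);
  return cached ? JSON.parse(cached) : null;
}
```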

Speaker1: Yeah.

Frode Hegland: I’ll give the mic to Fabian and Leon. But do we agree that for now, for September, it’s totally okay to make the user do this?

Speaker1: Yes. Right.

Frode Hegland: Okay, good. So let’s just do that, Andrew, and we can start testing with the basic iCloud thing, and yeah, fine. I’m glad we’ve agreed; do that as quick as you can.

Andrew Thompson: Obviously I’m pretty sure this already works. This is what we have currently working. Yeah. You can just load the library in.

Speaker1: If you set up a.

Frode Hegland: User interface for that so that I can test it myself. That would be absolutely wonderful.

Andrew Thompson: Yeah, it’s there. You just tap at the bottom left and you select a document. So if you have it on the headset you can get it, but it has to be formatted like the library; that’s the caveat there. And so far the only document that I know of that’s formatted that way is my ACM test document, which has all the server stuff.

Frode Hegland: That’s in a way not your job; that’s the job of the other developers. Fabien, did I ruin everything for you now? I’m sorry.

Speaker1: Okay. Did you want.

Dene Grigar: To say something?

Frode Hegland: Yeah. Leon, please.

Dene Grigar: Now we can’t hear you.

Leon Van Kammen: Yeah. It’s funny I was holding my hand up, but in the meantime, everything I was thinking was already being said, so I will. I will mute myself. Thanks.

Dene Grigar: I’d like to throw in something here, because I’m thinking ahead. I have to write a six-month report to Sloan, which is coming up in June, right? And so what is it we’re going to say? And the thing I’ve been struggling with, you know, Frode, even with our proposal, is what’s going to cause someone to go out and buy a headset and use it. I’ve been polling people on my campus, polling faculty, staff, students, trying to find out if they have a headset and why they have one. And right now, the anecdotal information I can share is that people that like games have a Quest. I don’t know anybody else except me and the rest of you that might have an Apple Vision Pro right now. So the question arises, based on what we said in our grant proposal: what’s going to cause folks like us, academics, which is our audience, to go out and get involved in virtual reality? And I think the answer comes from the Apple Vision Pro with spatial computing, where I can actually work with four other people in a space and share documents and talk about a document, and read a document together and write about that document together. That is the impetus, and I think starting with that idea... and just to finish this thought, Leon, and then I’ll turn it over to you. I think I might have mentioned this a long time ago, but years ago, when I was at Texas Woman’s University in ’95, the department started using listservs to disseminate the departmental news, you know, the newsletter stuff, updates on meetings and who got awards and all that. It had been a paper document that would be slipped under our door or put in our mailbox, and that’s how we kept up with information.

Dene Grigar: The department decided in the later part of ’95 to start putting it out on a listserv. At the time, there were a lot of older folks in the department who did not use the internet, did not use it for email or anything. And the idea that my chair had was: we’ll do this anyway, and this will bring them into the fold; this will force them to have to use the internet. And it did. When they realized that they were missing out on something that was really vital and important for them, they turned to at least using email, right? That was considered kosher for them. So I think the same thing could be said about the Apple Vision Pro and the way I can actually tutor four students at a time, or work with a small group of scholars. Right now I’m writing a proposal for the triangle essay that’s coming up in ’24, working with Richard Schneider and Andrew, and doing this together in that space would be ideal. So we have to think about the reasons why folks are going to use headsets. And I think this takes us right back to the notion of: what does it mean to share this information? Right. I can show what I’ve got on my server, in my local files. But how are we going to get that out there, like a Google Doc, where we can both do something to that doc simultaneously? So that’s where I’m thinking we’re headed with all of this material. Am I off base here? Does that make sense to everybody?

Frode Hegland: I don’t think you’re off base at all. But I think multi-user support will be quite a bit of an extra push.

Brandel Zachernuk: Well, there are a couple of ways that multi-user support can work. One is in FaceTime, including, thankfully, now spatial FaceTime. I don’t know if you all have tried that together, but it’s pretty wonderful; I’m afraid I cannot join you. That includes using a web page; that web page at present is two-dimensional, but it has the ability to have WebGL on it, and that’s where Apple is providing the entirety of the infrastructure for you in terms of having people being able to relate to these documents and point and things like that. The alternative, like Frode says, is webXR, which is going to be expensive to coordinate, but will give you all the ability to be in space there. An alternative as well is the construction of models, either in the page, which is not going to be available in visionOS 1, or also to generate AR Quick Look files, which have the ability to stand as sculptural elements in the middle of those spatial FaceTime calls. So you’ve tried sharing Safari; I don’t know if you’ve tried sharing Quick Look files, but you can do that same thing as well. So if one person is in an experience, be it in Files or whatever else, they can export and look at these files. Those can also constitute part of something that is spatially shared between people, and it’s actually rotated and maintained in terms of its relative position to everybody. So that’s a pretty cool thing to be able to do as well. Right.

Speaker1: Leon. Well, if you’re talking, you’re muted again.

Leon Van Kammen: Yeah, I agree with both your observations. Also, the fear of missing out is a great incentive to sort of make people curious about something and get out of their comfort zone. I also want to mention two almost invisible reasons; maybe fear of missing out is also an invisible reason, a third one, but there are two other ones which I have also personally experienced. The first reason to get one is that basically all the bad reviews concerning the basics of VR are over. If you, for example, look at the reviews of the Vision Pro, it’s usually nitpicking about really silly issues, while actually the full package is amazing, and the same with the latest Quests as well. So that was, for me and a lot of other people, like: okay, this is the right time now. There’s no more talk about nausea or that kind of stuff. And another reason is also an invisible one, which is fast publishing. I know a lot of friends and other people who have always been interested in VR, but they were not particularly interested in playing with VR through the app stores. Developing and submitting an app is way too complicated for them to just try something out. And since these new VR headsets support webXR, including the latest Quests and the Vision Pro, this is really perfect timing to go for fast publishing. And this fast publishing also maybe connects again with Dene’s point: if everybody can publish very fast to each other, then that’s a great reason to jump in on the fun.

Frode Hegland: Yeah.

Fabien Benetou: Yeah, just to quickly go back on the networking aspect. I’ll be a little bit egoist and egocentric here. What gets me going with the discussions here, and prototyping more generally, and to be a little bit pompous, is to dare to go where nobody has gone before. Might be a mistake, might not be. And I imagine, I would hope at least, that people in research are smart enough to say: okay, this works this way, that’s been the usual way, but no, I don’t know, I want to try this. So basically, because resources are limited, the crazier stuff is what we should try, and the stuff like drag-and-dropping a file we can forget, because we and other people get it already.

Speaker1: Yeah, so.

Frode Hegland: Many interesting thing here. As I’ve asked in the chat, we can’t do shareplay in XR, can we?

Speaker1: No.

Frode Hegland: So when it comes to sharing, the native apps, we can use that, and that is something we’re doing. Dene and I already had a brief session where we looked at a PDF together in Reader, which showed all kinds of interesting issues. A Google Doc would work in a shared space if it was in Safari, absolutely. But in terms of what we are developing here, because we are in a fully immersive environment, I don’t think we can do multi-user yet. I think we should aim for that because it’s clearly useful, but I don’t think it should be something we work on before September. I really think we should do the first use case, which is preparing to write an academic document, as a single person. And now that Andrew has pointed out that I’m legally blind, I didn’t notice the button on our web page; I think we’ve come really far just by having that. Thank you for that. Now, any more issues on this? I’m very, very happy with where we’ve gotten. Actually, before... yes, Dene, and then another thing, and then Brandel, excuse me, Fabien will take over.

Dene Grigar: We have two years. Frode wrote me this morning, a little bit like, we gotta get this done. It’s like, calm down. Look at the timeline: we’re way ahead of the timeline. We’ve got another year and a half. We’re now in month four, going on five; we’re barely in month five yet, and month five is a week away.

Frode Hegland: We do, but still: until I’ve been able to write, from Reader, a JSON that Andrew’s system will accept, we haven’t been able to upload the user’s library yet, so that’s going to have to be a real priority. So I’m very glad about what we’ve got today.

Dene Grigar: Let me just respond to that. So writing is in year two. But as I wrote in the case studies, writing is not separate from reading. So, some ability to write. But we don’t really have to have anything for writing until year two.

Speaker1: Yeah. No.

Frode Hegland: Yeah, yeah, absolutely. That’s absolutely the way it is. But what I’m saying is: multi-user support, of course, will come later as we grow into it. And also writing, yes. But just having the user’s own documents isn’t working yet. Andrew, I know you’re there working away in the background, but I guess I should get my guys to look at your JSON again, see if they can write a compatible JSON. And then we’d have the beginning of a library that can be uploaded. So that’s very, very good.

Andrew Thompson: Yeah, we do have some of the older prototypes. I’ve since removed it, when we added the library, but the older prototypes have another button that loads an individual document, so you can still load your individual documents. It’s just that the library is now the better version, so that’s replaced it.

Frode Hegland: Oh, okay. Can you please, along to the right of upload library, have a button that says upload document?

Andrew Thompson: Do you want to allow that? Like bypass the library? We could if you’d like.

Frode Hegland: Okay, let’s not do that now. That’s fine. Let’s keep going on this. I just wanted to say, now that Brandel is here as well: the PDF that I sent in an email and put on Slack, of the summary of the meetings, please have a look at it. I used Claude AI, which I find is absolutely freaking amazing, because you can set many, many prompts in one go. So when you look at each meeting, all of those things, names, what people disagree on and agree on, all of that is done in one single prompt. It’s crazy. So what I’d really like to know from you guys: number one, is this useful, rather than having a very, very long transcript? And should we tweak the prompt to ask different questions? And if you do think it’s useful, what I plan to do is make a new document of this after every meeting. When I’ve done this, you can have a look at your own stuff and just see if there’s anything egregiously wrong, and then I’ll just fix it.

Speaker1: No, I think.

Frode Hegland: It comes close to what we’ve been talking about for years: having really, really useful summaries. It happens to be in PDF, but because I’m doing it in Author, I can also export to HTML if anybody wants to do exciting views of it, of course. I personally find it amazing. It shows things like: there was disagreement on this issue; the team seems to have learned this. It’s like, for real? Anyway. All right, any other things before Fabien goes crazy? Okay, I guess Dene is back in a second.

Brandel Zachernuk: If you haven’t played with the generation of USD files, it’s genuinely fun: the process where a web page has the ability to put those things in space, like my birthday cake and my books and things like that. It’s really neat, because you can just build stuff and then see it and then pass it around as part of a workflow.
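For reference, the web-page-to-space handoff Brandel describes can be as small as an AR Quick Look link: in Safari, an anchor with rel="ar" that wraps an image and points at a .usdz file opens the model in AR. The file names here are placeholders, and generating the USDZ itself is a separate step:

```typescript
// Sketch only: add an AR Quick Look link for a generated USDZ file.
function addQuickLookLink(usdzUrl: string, previewUrl: string): void {
  const link = document.createElement('a');
  link.rel = 'ar';          // Safari treats rel="ar" anchors as AR Quick Look launchers
  link.href = usdzUrl;      // e.g. a model the page just generated

  const preview = document.createElement('img');
  preview.src = previewUrl; // AR Quick Look expects an image as the direct child
  link.appendChild(preview);

  document.body.appendChild(link);
}

addQuickLookLink('generated-model.usdz', 'generated-preview.png');
```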

Speaker1: Well, you put your foot.

Frode Hegland: In it now, because this comes into the issue of being able to have an expandable document. So USD obviously relates to that.

Speaker1: The.

Frode Hegland: Once we’ve gone through the workflow of being able to upload the library a few times, we really need to discuss how we want the HTML versions of the documents to be rendered, of course, and that comes into the three-dimensional stuff like you’re talking about, Brandel. And it would just be absolutely amazing to have that initially encoded in the visual meta, so when you open the document it does the USD thing right there natively. Yeah. So yeah, it’s super cool. Okay, Fabien, over to you.

Fabien Benetou: So I’ll try the usual. So sharing my screen might not work, so just keep me posted if there is an issue. Let me know if you see anything. See nothing.

Dene Grigar: Not yet.

Speaker1: Okay.

Dene Grigar: We see a screen that says quest three.

Fabien Benetou: Okay, then you see something.

Dene Grigar: Black right now.

Fabien Benetou: Okay. And you see me in the quest now with my hands moving.

Dene Grigar: I see you in the window of our, you know, talking heads. But not in that. Not in that screen that you’re trying to present. It’s not feeding into the system. There we go. That that worked. Whatever you just did, that it went away. That. That’s it, that’s it.

Fabien Benetou: We’re going to try. Do you see anything?

Dene Grigar: It went away. You had it and then it went away.

Frode Hegland: Black screen with you are sharing your entire screen. Stop sharing.

Speaker1: Yeah.

Fabien Benetou: It’s flickering, isn’t it? No. Okay. It’s fine. I’ll show some little recordings I had from before. Not live, but you’ll get the idea just the same. The first one: I don’t remember if I showed you this. I showed some of you, but basically the idea is that, of course, you get post-it notes. But there are two kinds of post-it notes: the blue ones with code, and the yellow ones without, with just text, the usual ones. I use post-it notes because that’s what people are used to. The color means, though, that they behave differently. So if you watch what happens, the yellow one is pushed away, because it’s not code, so it’s not friendly with code, and code snaps next to code. The idea behind this, and I discussed it a little bit with Leon too, is manipulating things with the idea of a grammar on how objects behave, just like grammar in linguistics: you can have different types of tokens or words in a sentence, verb versus adjective versus noun, et cetera, and they have a certain type and thus a certain behavior, and certain rules, so you cannot make a sentence in English that nobody would understand. Well, I think that applies to pretty much any domain of knowledge, like chemistry, like programming. And I think if you can make that grammar, I don’t want to say tangible, but at least manipulable, or see the result of it,

Fabien Benetou: I think that helps a lot. So, some of you know I’m quite a fan of Scratch and how you can have blocks that have a shape, and then you program by moving those blocks around. I’m now quite intimately convinced that most domains of knowledge would benefit from being represented like this, as grammar tokens and rules, or entities and rules, for learning at the very least; for manipulating I’m not sure, but at least for learning. So I generalized this a little bit more with those cubes. What they do now is snap next to each other and stack. So when you put one on top of another cube, it’s going to move next to the furthest one along, where it can avoid overlapping. I won’t show the code, but what I liked about it is that it’s actually quite simple, maybe five lines of code. It’s not a complex thing to do, but in terms of possibility, again, you are manipulating objects in space. Here they’re just cubes, but you can imagine manipulating entities from different domains of knowledge, and they have their own set of rules, namely that they don’t snap to post-it notes and the post-it notes don’t snap to the cubes. And then I started on the next step. I haven’t finished it yet, but let’s move on.

Frode Hegland: There’s too much intro. I’m getting really, really lost. Can you show us and then maybe repeat what you’re saying?

Fabien Benetou: I’m showing you just this. I’m not going to show from the headset.

Brandel Zachernuk: You’re not showing anything, Fabien.

Fabien Benetou: Oh, okay. I thought.

Speaker1: You were previewing.

Frode Hegland: What you’re going to show.

Fabien Benetou: No. Okay, so I tried to show from the headset point of view, and I don’t know if it’s the networking, but it fails most of the time. Okay. Well, yeah. I really love sharing my screen on this Zoom. So do you see my screen?

Speaker1: Yes. Yes. Thank you.

Fabien Benetou: I think that’s a bit better, I hope now. So you see the post-it notes then?

Dene Grigar: That’s nice. Thank you for that.

Fabien Benetou: And you see the yellow one doesn’t snap to the blue one; it snaps away from it. So it’s kind of funny that I explained all this without the visual, because my whole point is that being able to see those abstract concepts, like a grammar with entities and rules, and being able to manipulate them, is what makes them understandable. Of course, if you’re already familiar with the kind of concepts I mentioned, maybe you understood despite my poor explanation, but maybe not. And I think now that you see it, at least I hope, and even more if you were to jump in the headset and move those things around, be they post-it notes or whatever, you would get it, and you would get a literal sense of it a lot more. Now, like I was saying, the same but with 3D objects instead of post-it notes, a bit less exciting: the cubes. And like I was saying, they stack next to each other, not on top of each other. And if you put one next to one that is already there, it’s going to snap at the bottom, let’s say, of that stack. And they don’t interact with the post-it notes. So the idea also is that you can have different entities or objects that have their own grammars, and they don’t have to apply to everything in that space. Does it make more sense now with the visual?

Speaker1: Yes.

Dene Grigar: And I love the idea of how you express the cognition part of it. Right.

Fabien Benetou: Thanks. And I think, actually, the thing I did not show yet, visually and manipulably, is those rules between entities: when you release an object, where does it go? Those rules themselves. Right now they’re just code, and again simple, a couple of lines, but those lines of code should be arrows or blocks or lines or something that you can grab, so that you can design or modify the grammar: how the different entities relate to each other and behave when you do a thing, like when you release them, for example. But that’s just theoretical, an idea from a couple of days ago; I have not implemented it yet. But I am quite convinced that manipulating the grammar itself helps. Also, one of the motivations behind this: I don’t know if some of you learned basic arithmetic as a kid with this, but I learned with those cubes, little blocks of wood, when I was a kid. When you put ten together, you can exchange them with the teacher, or with another kid, for the line of ten. And when you have ten lines, you have those plates. And when you have ten plates, you have a big cube with a thousand tiny cubes in it, and as a young kid, a thousand is a big number. Being able to manipulate them, giving enough affordance to those abstract concepts, really helped me. Or, I don’t remember if it really helped me understand, but I have fond memories of such things.

Fabien Benetou: So that’s a little bit the hope there: to be able to do the same thing, but to generalize it, to have more entities with more grammars, and rules like, for example, ten become a line and then you can’t break it without an action. I’m going to explore that more. And yes, the Three.js arrows for the grid that you see at the bottom, and the arrows there, are for finer motor skills: if you need to snap, for example, you want to see where you snap to, with a grid. And just yesterday or this morning: being able to either snap on the corners, where you specify which corner you want to snap to, or edit the entity itself while in VR. So basically editing the content in VR, not outside of it. And I think that’s about it. There are a couple of other things that I wanted to show you today, but they don’t work, basically a couple of new commands where I can attach things to myself. So if I were to rotate the camera, everything would rotate relative to me, and when I release, it would go back there. A little bit like what was mentioned earlier about having the main screen in front of us, and then entities or post-it notes, et cetera, that move, but only for a short amount of time, and then back to the main thing being reading, for example, in the headset.

Speaker1: Yeah, that was phenomenal.
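
For readers who want the mechanics rather than the demo, here is a minimal sketch, in TypeScript, of the kind of type-plus-rule grammar Fabien describes above. It is not his code, which he did not show; the entity kinds, spacing, and distance threshold are all illustrative assumptions.

type Kind = 'codeNote' | 'textNote' | 'cube'; // illustrative entity kinds

interface Vec3 { x: number; y: number; z: number }
interface Entity { kind: Kind; position: Vec3 }

const GAP = 0.25; // assumed spacing between snapped items, in metres

function dist(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

// Called when the user lets go of `released`; `scene` holds every entity in the space.
function onRelease(released: Entity, scene: Entity[]): void {
  const others = scene.filter(e => e !== released);
  const nearest = [...others].sort(
    (a, b) => dist(a.position, released.position) - dist(b.position, released.position)
  )[0];
  if (!nearest || dist(nearest.position, released.position) > 3 * GAP) return; // nothing close enough

  if (released.kind === 'codeNote' && nearest.kind === 'codeNote') {
    // code snaps next to code
    released.position = { ...nearest.position, x: nearest.position.x + GAP };
  } else if (released.kind === 'textNote' && nearest.kind === 'codeNote') {
    // plain text is "not friendly" with code, so it is pushed away instead
    released.position = { ...nearest.position, x: nearest.position.x - 4 * GAP };
  } else if (released.kind === 'cube' && nearest.kind === 'cube') {
    // cubes never overlap: join the far end of the existing row
    const rowEndX = Math.max(...others.filter(e => e.kind === 'cube').map(e => e.position.x));
    released.position = { ...nearest.position, x: rowEndX + GAP };
  }
  // any pair not named above (cube versus post-it, and so on) simply ignores the other
}

As in the demo, the whole rule set lives in one small table of cases, which is also what would make it editable from inside the headset later on.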

Frode Hegland: Sorry, I was on YouTube here finding a Ted Nelson video. No, you’re okay. Fine. It was nice to see it up there. It’s really, really amazing, and I look forward to being able to go in and out of your space from these other spaces. I could imagine a few obvious use cases, where you have an author represented as a shape, and each side can mean something: one side is what the author has written, the other side is who has cited this author’s papers. You know, an obvious thing. I think that once we build the obvious ones, because as a design principle I don’t think we should be afraid of the obvious. That’s what I tell my students anyway. If something is obvious, do it. It might be brilliant, it might be silly, but you’ve done it, so now you can free your brain to go to the next thing, right? So if we were to build some obvious grammars in here, at least conceptually, just us writing documents, I could imagine getting to a point of not only having scientific experiments embedded as three-dimensional blocks in a paper, which would be insane, but the paper itself having many elements.

Frode Hegland: One thing we talked about a long time ago is what Vint Cerf calls computational text. A simple version of that would be: the text says next Wednesday I plan to blah blah, and then Wednesday happens, and it says next Wednesday I planned to blah blah. So it changes; contextual is what I’m trying to say. But to really have the mind space for this, I think June is very important. In June, as you know, I’m hosting a future tech social. No presentations, none of that. We’re just going to spend the weekend chatting. And for those who can’t be here, we will try to have a Zoom on at all times. But I can imagine that sitting around the fireplace at 2:00 in the morning will be the time when we can develop for this type of environment. What you’re presenting here, Fabien, is so real and exciting. You were talking about Scratch earlier, right? One of the versions of
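
As an aside for readers: the computational text idea Frode mentions, a phrase that re-renders itself relative to today’s date, can be sketched in a few lines of TypeScript. This is only an illustration of the concept; the type and field names are invented here, not taken from any existing tool.

interface RelativePhrase {
  date: Date;     // the event the sentence refers to
  future: string; // wording to show before the date
  past: string;   // wording to show once the date has passed
}

function renderPhrase(p: RelativePhrase, now: Date = new Date()): string {
  return now < p.date ? p.future : p.past;
}

// The same stored sentence reads differently before and after the date.
const demoNote: RelativePhrase = {
  date: new Date('2024-04-24'),
  future: 'Next Wednesday I plan to demo the library.',
  past: 'Last Wednesday I planned to demo the library.',
};
console.log(renderPhrase(demoNote, new Date('2024-04-22'))); // future wording
console.log(renderPhrase(demoNote, new Date('2024-04-26'))); // past wording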

Fabien Benetou: I don’t remember. I think to me at least, I associate with the MIT kindergarten and

Frode Hegland: Squeak and scratch and

Speaker1: What’s.

Frode Hegland: Yeah, probably it is. Thank you for bringing that up. What you’re doing here is that, but not just in 3D, in any dimension you want. And sorry for talking so long, but I’ll explain why I think this is so exciting. The first time I met Vint Cerf was actually for breakfast; it was an interview for the Doug Engelbart thing we’re doing, and, you know, he was being very humble. And I said: Vint, what you’ve invented is not a superhighway, like they talked about in America a long time ago. You’ve invented a means through which anybody can make a highway any time they need it. That’s incredible. And that’s what I see you’ve done here, Fabien, because these are not only three-dimensional objects, they’re computational objects. You have now developed the infrastructure through which we can build as many dimensional structures as we want. So I just think it’s going to take a huge amount of time for us to experiment and figure out where this can go. Leon, please take over.

Leon Van Kammen: Thank you. Yeah. I just want to take this opportunity to say to Fabien: this is really mind-blowing. I really like this, and yeah, it’s worth an applause. The term symbolic manipulation, and then in 3D: I think we’ve talked about it many times, and it’s really impressive to actually see it in front of us. What I was also thinking, because of course I’m a proponent of URLs and URIs, is that I would really love to be able to add some metadata to a building block and attach a URL to it. Another thing, and this is why it is blowing my mind, is that we’ve been talking about building a virtual library many times, and we have noticed that for many people this means something else. And with the library also come many different ideas of how the workflow of that virtual library should be: basically, the grammar. For example, I was really amazed when I heard Dene talk about her workflow spatially; that was completely new for me.

Leon Van Kammen: And it’s something I’m completely not used to. Basically, that is also a certain grammar. And I was thinking that if you, you know, continue with this work and you keep things very generic, so that people can build or create their own grammars, then, yeah, it almost reminds me a bit of the beginning of Roblox, which is more like a children’s virtual world with not many rules; they create their own rules by interacting with the environment, just by building simple objects out of these blocks. The creators of the environment had no specific rules for what you should do in that environment; they just had a certain grammar for how you can build things. And this really gives me the same kind of vibes: if you don’t opinionate this into a certain Trello kind of application, but keep it really flexible, like a fluid grammar, you can decide for yourself how things should interact with each other. So I think this could be a really powerful starting point. Yeah.

Dene Grigar: I threw in here in the chat that we need a shared grammar, or else we communicate with no one but ourselves. And that goes back to Frode’s concept of reading and writing that was in the document he shared; I forget where it went, but it references reading, you know, that document that I read over the weekend, or this morning. But the idea is that if I have my own grammar and you have your own grammar and you have your own grammar, then it’s really hard for us to put a sentence together that we can all understand, right? And so there is personal, and I think we all recognize there’s personal language, there’s family language, there’s community language, and I would say even cosmic language, right? I think of the concept of ego, oikos, polis, and cosmos, and we’re always traversing among these four entities. And so the thing about being married is that you have your own family, you know, words, things that are triggers, things that you can say and no one else in the room understands. And then you get out to another place, outside your home, and there’s a language there that you have to be part of in order to communicate. So I think that’s the thing to think about: as we’re building these personal libraries and then thinking about sharing them, sharing with one or two people in our field, but then sharing in a larger community, that’s a different way of thinking about reading and writing.

Frode Hegland: One of the people who have been to the Future of Text is David Bellos, who wrote Is That a Fish in Your Ear?, a reference to Douglas Adams. And it’s exactly what you’re saying, Dene. He points out that even within a family, people will use language slightly differently. And before you had countries, peoples, the language would change as you kept moving. And that’s really, really important. Now, this little button that I have on the iPhone, on top here, and by the way, how cool is that screensaver background thing? So cool. Anyway, when we were in China we would use that a lot: I would speak a phrase and it would be translated into Chinese. So translation, even within this, may become necessary. But I do strongly agree with Dene that to strive for a common grammar is really, really important. Still, we may need some kind of translation at some points.

Dene Grigar: And I think about the visual language, and we talk a lot about that in the lab as we’re building interactive spaces, with UX and UI: where should a button go, what does this button look like as opposed to that button, and, you know, trying to come up with some consistency so that users know how to traverse a document, a series of documents, and, in the sense of this project, the consistency of design throughout. And so this goes back once again to what you’re showing with the building blocks of grammar, and recognizing that all the things that are red represent this. I think the genius of Apple was actually giving us that visual language in a computer environment, one that has transcended the computer environment.

Frode Hegland: That’s what Guy Kawasaki keeps saying. Absolutely. Absolutely.

Frode Hegland: So the first thing we’re going to start doing now is test.

Speaker1: Okay.

Frode Hegland: I’m just, for a moment, backtracking a little bit. So, Andrew said he had upload document in addition to upload library for a while. I’m beginning to think we should reinstate the upload document button as well, at least for testing, so that if we have a well-formed HTML from ACM, we should be able to just pick one and have it go into the system when we start designing what it would look like.

Frode Hegland: Once we start doing that, then we can decide how to section it, how to cut it. And that may very well relate to what Fabien is doing. For the sake of argument, I’m now picturing this as kind of robots, or proteins in our body: this HTML thing comes in, and there are known sections. Here’s a section called references. So there is a thing waiting for it that says: oh, you’re references, I’m going to eat you up and process you in a specific way, so that you are computationally able to be in a 3D environment. So that might mean that, like Ted Nelson’s ZigZag, ZigZag more than Xanadu, you can choose to have this rendered and viewed in very different ways, to see the connections and relationships and so on. Yeah. Leon.
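
A minimal sketch of that something-is-waiting-to-eat-the-references-section idea, assuming the document arrives as HTML already split into named sections. The handler names and output shapes below are illustrative, not part of any existing pipeline.

type SectionHandler = (html: string) => { kind: string; payload: string };

function extractText(html: string): string {
  // crude tag stripper, enough for a sketch
  return html.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim();
}

// One digesting handler per known section name.
const handlers: Record<string, SectionHandler> = {
  references: html => ({ kind: 'citation-list', payload: extractText(html) }),
  abstract: html => ({ kind: 'summary-panel', payload: extractText(html) }),
};

// Each known section is eaten by its handler and comes out as an object a 3D
// renderer could lay out on its own terms; unknown sections fall through as plain text.
function digest(sections: { name: string; html: string }[]) {
  return sections.map(({ name, html }) => {
    const handler = handlers[name.toLowerCase()];
    return handler ? handler(html) : { kind: 'plain-text', payload: extractText(html) };
  });
}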

Leon Van Kammen: So when you were just mentioning that, I was thinking that, in contrast to Scratch, the programming language, in this situation you could also grab something and put it into a box, which would basically mean: this is going to be processed in a certain way. Let’s say the reference box, or the index box: make this an index item. And without any programming, it’s just a symbol, like a box, and grabbing something and putting it into a box can just be a symbol for a very technical operation. I think this can remove a lot of obstacles. For example, with Scratch, I do think it’s still a bit text-heavy perhaps; it’s still a bit like programming. But because of these new affordances in VR, that barrier could be even lower. So, yeah, it’s not really a question, but this is my view.

Frode Hegland: It’s very exciting. New realities are exciting, like sitting on a very long flight with a headset and having to use it. Part of it I was using with no trackpad, and I’m sorry, but for knowledge work you have got to have a mouse or trackpad; the whole vision thing of only pinching is absolutely ridiculous, which is a useful thing to find out. So, with what you presented here, Fabien, the potential is insane. And I agree very much with Leon that we should try to, there you go, take ownership of working with you, having it as open as possible. So, for instance, you can do these kinds of transformations and allow people to add the code for how these should work and what the grammar should be, even if they do require programming. Because the kind of thing that I can absolutely see, that we’ve talked about, I think, but this kind of enables: you open up a new document, and I’ll do the quote marks for document, whatever the heck that will be in a few years, and you don’t just open it up plain, you open it up with many, many different aspects of it analyzed for you. One of the things you want to know is a basic check of: has it been redacted? Excuse me, not redacted, retracted. That’s a simple thing, but you may very well quickly want to know the key concepts, and the author, and how it fits in with what your colleagues are talking about at the moment. And these are the types of things that can really benefit from the active dimensionality that you’re giving it. I can almost imagine, again, proteins in the body, or a washing line of elements, these things hanging out, and you see how they relate. Maybe we should think of knowledge as protein-folding problems now.

Speaker1: That could be fun.

Fabien Benetou: So, to go a bit further on that: one of the things I learned as a kid was a model of knowledge with different layers, roughly physics at the bottom, chemistry on top of that, biology on top of that, and then eventually sociology, politics, et cetera. A model from the 60s or 70s, I think. I could criticize it, but it helped me to put different parts of knowledge into sections and see how they might relate to each other. And yes, the examples that you mentioned, Frode, are the kind of thing I had in mind specifically. One of the classic demos that I guess most of us here have tried is the one where you take different molecules, and when you release them close to each other they snap, and it’s like H2O: a water bottle appears, or something like this. If you have not tried that yet, I think it’s interesting and I think it’s fun. There are even a couple of games like this, I forget the name, where you can build a civilization from rocks: you start from rocks, and then you have more and more complex structures. And that’s kind of the point.

Fabien Benetou: The point there is that you have different grammars: all, or most, of the different layers of knowledge are represented this way, with different atoms or tokens or whatever, and how they relate to each other. So the goal is indeed to say: hey, let’s play with actual symbols of what an atom is, or chemistry, et cetera. And what I find exciting about this kind of perspective is that it seems to match, surely not all, but at least most domains of knowledge. So in terms of exploration, I would say it’s potentially quite fruitful. And to be a bit more pragmatic, it aligns with the exploration related to Sloan: you take a PDF of a published article, and then you try to see which of such items we can represent, let’s say starting with chemistry, because that’s the most obvious one, but others too. That could also be a bit like what Brandel has been doing, putting prototype papers in VR: it’s not just the paper itself, it’s within a context, and eventually with some tools to explore it. So to me that would be proper augmentation, and thus properly exciting.

Speaker1: Absolutely.

Frode Hegland: So, Dene, putting you on the spot: could you imagine authoring a paper based on these kinds of blocks, if the way you told the blocks what to be was something that you and I could deal with, and we wouldn’t have to be programmers?

Dene Grigar: I don’t know if I could write that way, but I think that way. I mean, you don’t know how your brain is working cognitively as you’re putting ideas together, but I imagine it’s building blocks like that. But to express my ideas and put them on paper, words are the outcome of that conceptualization. Sentence structure, phrases, grammar are all expressed through words. Does that make sense?

Frode Hegland: It does. And it makes me think of what Mark Anderson said when I started doing the Map in Author a few years ago. He said that it looks like it can be very useful at the start of a project and maybe at the end of a project, but maybe not so much during. So when you say you think that way, you may not author that way.

Frode Hegland: I think that makes a lot of sense.

Dene Grigar: Well, reading and writing are cognitive processes, and it’s not something that is natural. I mean, from what I’ve read about reading theories, it’s not natural to us. It’s something that we can do and have learned, taught ourselves to do. I mean, if you have a child that’s never been taught to read, they don’t read. You have to sustain that reading experience with them. And so, you know, it is something that’s outside of the norm for the human brain, although we have the capacity for it.

Frode Hegland: What I found.

Speaker1: Yeah. You know that.

Frode Hegland: The question of what reading is, though, is of course also a very interesting question, with its many aspects. And we’ve talked about grammar a lot. Grammar is of course key to reading, key to text, because without grammar you can only do one thing at a time, basically.

Dene Grigar: Can I say something about grammar that’s interesting? Rob, and those of you that speak many languages, know that the grammar kind of defines the cultural concepts. When I was studying Greek, it was so fascinating to me that you could have the verb first, and the verb contains the subject, right? The subject was not separate unless you made it separate. So the action and the agent were connected, whereas in English we have: she baked a cake. Baking is separate from the person who does the action. And pulling those two concepts together says a lot about ancient Greek culture and the way language developed for them. Or think about word order in French: the dress and then the color, as opposed to red dress. So the emphasis is on the object as opposed to its description. It’s just fascinating to me. So grammar is a construct, and it’s one we can learn, and there are different constructs. And if you train your brain to think about the different constructs, you can grasp different grammars. But most Americans do not know another language, so they only have one construct, and even then they don’t do well with it.

Frode Hegland: In Kevin Kelly’s Out of Control, which I hope you’ve read, it’s quite brilliant, old now, anyway, he has this little anecdote of a professor dropping a pencil on a table and saying: the problem with Western thinking is that we believe the pencil is more real than its motion.

Speaker1: So the table.

Dene Grigar: Or the table?

Speaker1: Well.

Frode Hegland: Whereas really the motion and the interaction are the thing that matters, you know, like how in Asian grammars and languages it tends to be more about the relationship rather than the things. So the way this is encoded in the DNA of a grammar is absolutely important, and different, and worth not

Speaker1: Ignoring.

Frode Hegland: Yeah. Anyway.

Brandel Zachernuk: Similarly, Bill Buxton says: why is Canada like a state transition diagram? Because they’re both unduly influenced by the states.

Frode Hegland: That’s funny. That’s quite funny. I tried, by the way,

Dene Grigar: Won’t be saying that when I’m on. I will not repeat that in Montreal tomorrow. That would not make me popular.

Frode Hegland: I tried, for the little graphic sphere, one of the logos for the events, to do a reflection of a globe in there, and it always defaults to North America. And I thought, that’s fine, we’re going to be in Seattle, Washington. Excuse me, Vancouver, Washington. So I tried to prompt it to highlight that, to center the map on that. It just wouldn’t; it just goes to the standard America-as-the-world. So that’s the training data it’s been given.

Speaker1: Yeah, yeah.

Brandel Zachernuk: But yeah, his point is that transitions in a state transition diagram are as meaningful as the states themselves. That the idea, like you say, of the motion of the pencil, the actions that are taken by the objects, are significant. And that’s a really useful lens to be able to apply, and to recognize the work needed to unwind our default views.
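
Brandel’s point can be made concrete with a small sketch in which the transitions are first-class objects carrying their own meaning, rather than bare arrows between states. The state and action names below are purely illustrative.

type State = 'closed' | 'open' | 'annotated';

interface Transition {
  from: State;
  to: State;
  action: string; // the motion itself carries meaning, not just the endpoints
}

const transitions: Transition[] = [
  { from: 'closed', to: 'open', action: 'reader opens the document' },
  { from: 'open', to: 'annotated', action: 'reader attaches a note' },
  { from: 'annotated', to: 'closed', action: 'reader files it away' },
];

// Walking the diagram yields the actions, which are as informative as the states.
function step(current: State, action: string): State {
  const match = transitions.find(t => t.from === current && t.action === action);
  return match ? match.to : current; // unknown actions leave the state unchanged
}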

Speaker1: Yeah, that makes sense.

Frode Hegland: That makes sense. Right. So, yeah, how many days are you going north of the border for?

Dene Grigar: I leave tomorrow at six in the morning and I’ll be back Saturday afternoon. Go.

Dene Grigar: So I won’t be here on Wednesday at all.

Frode Hegland: Right, you won’t be. Yeah. No, that’s fine.

Frode Hegland: I hope that this Wednesday will be very production-focused, to get these things we discussed working. So we should keep it simple. Also, there’s been a little bit of random discussion, primarily because of Adam, about trying to have a more European-friendly day time-wise, because, you know, he’s got a couple of hundred kids, it seems. So for him to have time between 4 and 6, sorry, for him it’s five and seven in the evening, it’s just not really feasible.

Frode Hegland: So if we can try to do something earlier, where obviously everybody’s invited but it may be a bit harder for some to attend, you know, we should think about that.

Dene Grigar: What time would that be? Because, you know, Andrew and I are on West Coast time, as are Rob and Brandel. So what time are you suggesting? And also, I just want to go on record saying that Andrew is putting in, what, two hours a week, four hours a week right now on meetings? He’s working behind the scenes while we’re talking right now, but when you call on him, he stops working to talk. Oh my God. And I don’t want to eat up too many of his hours with meetings.

Speaker1: No, no, no.

Frode Hegland: Of course not, no. I see Adam’s contribution, when he’s able to join us again, as twofold. One, he’s got quite deep technical knowledge, like some of you guys here, to help Andrew really optimize things and get things together. But mostly I see his contribution as doing just another weird thing, so that we end up with lots of little prototypes. So it would be mostly kind of a design day, a visual design day: we just mess around with things. But it would probably be quite early in the morning for you guys.

Dene Grigar: Well, Andrew, I don’t want to, I mean, 8:00 in the morning is kind of early for us anyway. So let’s you and I talk about times off this call. But I’m not going to be communicating with anybody starting tonight, like at 10:00. So whatever we decide to do, we probably ought to do it tonight, because I’m not going to be available. This conference, I’m very involved. Just put it that way.

Speaker1: Yeah.

Frode Hegland: Well, I don’t think we’re going to do anything this week, so it’s not super urgent. But we’ll have to see what works for Adam too, and just try to do different things, because it helps to have relatively different topics. Like today: we were supposed to be general, but it was very much about implementation, which I personally like at the moment, we need to do that, but we should also have more design days. And also, you had one book club; I thought that was a really good idea. I wasn’t there, obviously, but I did read as much of that as I could, and it’s a very, very different dimension to this. It’s not just implementation. I mean, half of what I read of Benedikt was annoying and half of it was brilliant, and that’s the way it should be, right?

Dene Grigar: Why was it annoying? I’m sorry. Why was it annoying?

Frode Hegland: Because a lot of it is pure philosophizing about spaces that at the time didn’t exist. Now it’s quite different, so it’s like: hang on. It can get a little navel-gazey. But that is philosophy, and that’s fine; it’s just that sometimes it can get a bit, okay, the reality is a bit different.

Frode Hegland: We don’t all have to agree with everyone’s perspective all the time to be inspired by it. And I do think we should invite him to be part of our work, obviously. I think Dene is a bit shocked.

Dene Grigar: Well, I mean, you and I are both academics. We do theory, we do philosophy; that’s part of what we learn in academe, right? So I don’t eschew it, I actually embrace it. I just find it interesting to see what people said this space was going to be like, and then to think about what they predicted. And that means we can look at what’s happening today and potentially make predictions for the future. And this is one of the things about the lab that I think is interesting: students come in and see the Macintoshes arranged in chronological order from ’77 onward, right? And the question I leave them with, when they see the 2006 to 2010 flat screens, is: okay, what’s next?

Speaker1: Yeah.

Frode Hegland: Exactly. That’s the whole point. It is well worth it, which is why I think it would be great if we could have a slot for it on a more regular basis. I’m just saying, reading it, some of it can be inspirational, and some of it is annoying, and I think that’s good. It’s provocative; that’s the whole point. It’s not just retreading what we already think. So I wasn’t in any way dismissing it; that’s why I brought it up. It’s a good thing to do. So, we don’t have too much time left, winding up today. I would say we made progress today. I’m going to produce an AI summary of this meeting as well; see if you agree with what it says. Also, Brandel, particularly for you, considering the kind of prototypes you developed before we really got into this: if you want the summaries in a different format so you can play around with them, of course, that would be fun.

Speaker1: I mean. Yeah.

Fabien Benetou: I’m resisting the urge to share some of the prototype ideas I have during the meeting, so that I don’t feel too much pressure to actually make them happen. But I’ll just take the occasion to thank everyone for the conversation.

Speaker1: Yeah.

Brandel Zachernuk: Thank you for that construction of the block-based system; that’s really, really interesting. I really like the idea of being able to kind of combine and separate. That’s something that I’ve been playing with. I finally finished Too Much to Know, on Mark Anderson’s recommendation, and I’m trying to think about what we might be able to do to a corpus of text in terms of the active manipulation of it: pulling it apart and turning different components of it into references for themselves. I’m also on a group chat with Andy Matuschak about doing that. He sent a picture of a set of note cards that he has arranged on his table, and I made a suggestion that he set up a thermal printer attached to an iPhone app to be able to print out SF Symbols onto post-it notes, so that they can be big, bold references to their concepts, rather than having to draw them himself. And yeah, I think there is a lot about level of detail and level of granularity, where you can reconcile coarse actions with fine ones within a digitally mediated environment, that I’m really excited by. I was also over, with Rob, at Dana Staff’s house yesterday, and she was showing me, not even a galley, but a paper book where she’s assembling a children’s book that I believe is greenlit. She’s trying very much to be allowed to do her own illustrations, and she has all of these components glued together or taped together.

Brandel Zachernuk: But it’s really remarkable to think about the way each of those sections has a life of its own. And one of the things I was telling her about is the fact that diagrams and illustrations all have to live together: they live throughout a book, but they also need to have symmetry and balance in terms of what level of detail you have in the thing on page ten versus the one on page 25. So drawing illustrations for an entire book is not just a matter of drawing pictures, but of making sure they hold together; people fall short of that. Having the ability to have view-specs for looking at just the pictures, or just the asides, to make sure the language is all consistent, is a remarkably important thing to be able to do, and it relates to writing at a greater scale than merely working on a specific paragraph. But that was a really interesting thing to see, because we’ve been friends with the Staffs for ten years now, and so she’s seen and humored my gesticulations about spatial information for the better part of a decade. Now I think it’s starting to make sense to her, now that she’s composing and compiling documents of this scale on her own terms. And so, yeah, I hope we get somewhere with it.

Dene Grigar: Can I mention something? When I was working in the motion tracking lab that I had, I was building an environment where you can put together poetry while you’re moving or dancing through the space, right? You’re holding this tracker in your hand, the infrared light is hitting it, and you’re able to evoke things, right? So it’s developing a grammar in that space: where do you put the verbs, where do you put the nouns, where do you put the adjectives, so that they’re intuitive? So if you hand someone in the audience a tracker and say, make a poem, where would they naturally go to find a verb, right? In the English language. And so I have this whole map where I mapped out, I think it was 2,032 words, across the space in a three-dimensional way, so that all the verbs were located in one place, one stratum, and the nouns in another, so that you can actually move fluidly through the space and make entire sentences. And that reminds me a lot of what you showed earlier in terms of the blocks, the concept blocks, because I could take that entire map and concept-block it just like you did. And I did color-code the map, so there is a correlation. I’ll dig that out of my archives and send it to you. It might be interesting for you to see it.
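
A minimal sketch of the kind of spatial word map Dene describes, where each part of speech lives in its own stratum of the room and the tracker picks up the nearest word. The strata heights, coordinates, and words are made up for illustration; her actual map is the one she offers to dig out of her archives.

type PartOfSpeech = 'verb' | 'noun' | 'adjective';

interface PlacedWord { word: string; pos: PartOfSpeech; x: number; y: number; z: number }

// Each part of speech occupies its own vertical band (stratum) of the space.
const strata: Record<PartOfSpeech, [number, number]> = {
  verb: [1.6, 2.2],      // roughly head height
  adjective: [1.0, 1.6], // chest height
  noun: [0.4, 1.0],      // waist height
};

function place(word: string, pos: PartOfSpeech, x: number, z: number): PlacedWord {
  const [lo, hi] = strata[pos];
  return { word, pos, x, y: (lo + hi) / 2, z };
}

// The tracker reports a hand position; return the nearest word within that stratum.
function nearestWord(words: PlacedWord[], hand: { x: number; y: number; z: number }): PlacedWord | undefined {
  return words
    .filter(w => hand.y >= strata[w.pos][0] && hand.y < strata[w.pos][1])
    .sort((a, b) =>
      Math.hypot(a.x - hand.x, a.z - hand.z) - Math.hypot(b.x - hand.x, b.z - hand.z))[0];
}

// Moving up and down changes the kind of word offered, so walking the room composes a sentence.
const words = [place('dance', 'verb', 1, 0), place('river', 'noun', 1, 0), place('quiet', 'adjective', 1, 0)];
console.log(nearestWord(words, { x: 0.8, y: 1.8, z: 0.1 })?.word); // "dance"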

Frode Hegland: So that brings up a question I think we’ve answered. Oh, Fabien, please go ahead. Sorry.

Fabien Benetou: I’ll go super quick, one thing. Also, Brandel, at some point you mentioned, I’m not sure that’s a word, but the reference-ability, the ability to go from one concept and point to another, basically. One thing I did just before leaving engineering school was to note down my ideas. I started with a plain file, just one idea after another, and then rather quickly I had an idea that referenced the previous one, or the one before that. So I started to have line numbers, so that the ideas were numbered. And as soon as I gave myself the ability to reference another idea, it exploded in terms of quantity for sure, and quality I would say too, just based on the number of new ideas that were able to reference each other. So I think it’s an extremely important and precious concept, being able to refer to each other. One thing also that led me to this, and to playing with those entities and grammars so that they relate to each other, is this: I made a stool. What’s interesting is I did the terrible welding behind it. I went to a workshop just five minutes by bike from here, and it’s a big thing, it’s heavy, I can stand on it. It’s mostly for my feet, so I can rest while watching Three Body Problem and whatever else on my computer.

Fabien Benetou: But it’s the thing I made, so I’m proud of it, regardless of how imperfect it is. Why do I mention it, though? Because during the workshop, the first part was how to cut the different pieces and the theory behind it, and honestly I was not excited, I’ll put it this way. The second part of the workshop, a couple of weeks after, was just welding two pieces together, and as soon as I did this, the third step was like: wow, I can put those two together, but I can also put those two together, and those, and those. As soon as you can compose from those basic building blocks, your mind starts running, and it’s super empowering. And of course, what I haven’t done there yet in XR is design the pieces that will weld together, so that you stretch the tube and you put them together. And there is also a certain grammar there: you cannot put a certain tube a certain way, you cannot weld a certain way, because physically speaking it’s not going to hold. So yeah, that was inspired by this very pragmatic event. But I think the loop, going back from thinking about those grammars to designing the tools to make more such physical things, yeah, super exciting. And I have to run, very sorry for this, but I have got to go. Take care. Thanks again.

Speaker1: I’m gonna go to.

Dene Grigar: I’m going to go to the other lab, so. Bye, everybody.

Speaker7: Yeah, I have to drop to have a great time. See you on Wednesday.

Speaker1: Bye, guys.

Frode Hegland: And for those of us who remain, just really briefly: we are now going to be working on, well, yeah, they’ve gone. I was thinking maybe Andrew would be here, because I’m wondering about libraries, articles and so on. Anyway, it looks like we’re making some progress. What do you guys think, Rob and Brandel?

Brandel Zachernuk: Yeah, I’m excited. I mean, I think there are still really big conceptual gaps in the public consciousness about what XR, what spatial computing, can do for information processing. But I think that this is a really fantastic beachhead into making it clear that there are ways of doing information, of reading and navigating and manipulating, that are more than just a fancy tech demo. And having it informed by the past of text and hypertext, and by people who are familiar with the idea of information in an academic context, rather than just visual designers or advertising people, is going to be really valuable for setting up a basis to reason about the actual merits for people doing real work with this stuff. So yeah, very grateful.

Speaker1: Yeah, it’s.

Frode Hegland: I’m feeling the weight of the reality of what’s at stake here quite severely, which is good and bad. But I’m also thinking back to the Macintosh, and this is why I want Adam to join us again. He’s been busy, so we have to bend over backwards, because we need a few of you guys who are technically brilliant.

Frode Hegland: Look, you know, I grew up dreaming about the Mac. The Mac, to me, changed my world, because if it had been a PC I wouldn’t be in this industry. The elegance of the Mac to me was amazing. And as you know, I see the Vision Pro as being a new Macintosh, and we have to develop for it to show that. You know how the joke goes: no one signed the inside of the Vision Pro; they signed the inside of the Macintosh, right? And it makes me think of Andy Caps and all these people that we saw, the three of us, at the 40th anniversary. It’s amazing. We need those people in our group, and we need to do what needs to be done for them to be happy. Because Andrew is great, but he’s young; he’s got a specific perspective. Someone like Fabien, as we saw today, and Adam, they do very different stuff, and we have to invest in that. So I think we’re doing the right thing.

Frode Hegland: It’s good. I hope you can both consider coming over here in June for the Future Tech Social weekend. I know it’s not likely, but if you can, excellent.

Frode Hegland: Of course, the weekend itself we haven’t decided yet. I think Fabien said, let me just see his texts, I think he just said anything but the last weekend. Not the last weekend; otherwise it should be fine. Yeah.

Speaker1: I returned from France on the 12th of June, so.

Frode Hegland: Okay, so we’ll try to make it early, before that.

Speaker1: Before that. Good, good. All right. Excellent.

Frode Hegland: Okay. Well, thank you guys. And yeah, Wednesday is just two days away. Bye for now.
