12 February 2024

Frode Hegland: Hey. Or, as they may say in your home state: howdy, partner. Can you hear me?

Dene Grigar: Morning. Morning.

Frode Hegland: Did you say you got two packages delivered for me?

Dene Grigar: Yes, two, I think.

Frode Hegland: If it’s two, then please send them. Because.

Dene Grigar: This one.

Frode Hegland: What's... sorry, what's that?

Dene Grigar: Well, I didn't open it up. I mean, it came from Meta.

Frode Hegland: Yeah. No, I mean, the whole point is there were two items: one is the light seal and one is the lenses.

Dene Grigar: That one may be two.

Frode Hegland: Fantastic.

Dene Grigar: Okay.

Frode Hegland: Desperately looking forward to testing. And you're getting yours on Thursday, which feels a year away at this rate. Hello, Andrew.

Dene Grigar: Good morning. Andrew. How are you?

Andrew Thompson: I am doing pretty well. A little sore. I went on a big hike over the weekend.

Dene Grigar: Where’d you go? Where’d you go?

Andrew Thompson: We went to Silver Falls.

Dene Grigar: Where is that located?

Andrew Thompson: Where is it located? It's like 2.5 hours away.

Dene Grigar: Did you go with Simone or just a bunch of other people?

Andrew Thompson: I went with my roommates, my family, and then some of their girlfriends. Simone was working.

Dene Grigar: Wow. So you didn't watch the Super Bowl? You're not into it? I didn't watch it; I didn't know if you were into it or not.

Andrew Thompson: I didn't watch it either. No.

Frode Hegland: It's a bit worrying, the ads for the Super Bowl. I watched some of them. You know, my background is advertising, so I can't help it. Hi, Keith. Yeah, I was just saying we're doing the intro chit-chat. Of course we're referring to the Super Bowl — not really watching it, but I did see some of the ads, having a background in advertising. And it was really quite scary: all the little writing at the bottom of the ads, you know, "do not try", "do not attempt", "professional" this and that. This time it was really quite extreme. It's like a car driving on a normal road: do not attempt, professional driver. You're selling a car, right? It was very odd. Anyway.

Dene Grigar: The New York Times had them all, and they ranked them. So I watched them — I just finished watching them.

Ken Perlin: And now I just want to learn how to do Christopher Walken.

Dene Grigar: I know. I love him. I've always had a thing for him since he was in Annie Hall as the crazy brother.

Frode Hegland: But did you see that at the end, when the car was driving? "Do not attempt — professional driver."

Ken Perlin: Oh, that's what you're talking about, I see. Unless you're Christopher Walken — then you could do it.

Dene Grigar: Yeah. Hey, did you ever see him on Saturday Night Live, in his little mini skit called... what was it, the cowbell? No, The Continental. And you never see him: you just see his hands and you hear his voice, and he's trying to seduce a woman to come into the apartment and be with him. And she... it's like Pepé Le Pew, right? It was his voice. It's the funniest thing. If you can ever find it on YouTube, it's probably the best thing he's ever done. The Continental: he's a sleazy, loungey kind of guy, you know, with an ascot and everything.

Frode Hegland: And he would do that well.

Dene Grigar: And of course, The Deer Hunter. Loved him in The Deer Hunter.

Frode Hegland: Oh, yeah. Wow. That’s a while ago.

Ken Perlin: I loved it. And you notice they had a reference to Weapon of Choice. Did you catch that? Yeah. Yeah.

Frode Hegland: No, I didn’t catch that. I will have to watch it again.

Ken Perlin: Yeah. Usher has his little cameo at the top of the stairs. It's really, really brilliant. Yes. Oh, yeah.

Dene Grigar: Calling my house.

Frode Hegland: That makes sense. I saw it early because, in the TV series Emily in Paris, her best friend is one of the voices in the commercial. So she tweeted it, or posted it.

Ken Perlin: Oh, nice.

Frode Hegland: That's clever, right? Get the celebrities involved and they'll spread the word. I think it's a clever thing.

Ken Perlin: Very clever.

Frode Hegland: And so, yeah, we're waiting for a few more people, but I think there will be a few more. Adam is on holiday, so that's why he's away. And Hussain is here for the first time. He's become a bit addicted to the notion of XR — particularly AR, almost towards Google Glass, I would say. So he's going to be talking about that in a few minutes. That'll be cool. He's already watched some of our videos, so he roughly knows who people are, more or less.

Hussain Panjvani: Nice to meet you all.

Ken Perlin: Nice to meet you.

Frode Hegland: So before we get into our freewheeling discussion today and the topic that Hussain is going to bring up — here's Mark. Good timing. Mark and Rob. So yeah, good timing, Mark: I was just going to bring up some of the points from your commentary, kind of the meta points. Hi, Rob. So, Mark pointed out, in the document he posted on Basecamp — I expect none of you have had time to read it today; of course, it's early for most of you, and that's fine — he made the point that when Andrew and we put up a new version to test, we really need to remember to write down what to test. And that's going forward; I don't think we need to go back over what we have. Otherwise it becomes too easy to do a critique of the whole environment, which can get a bit tiring and useless. So I thought that was a very useful thing. And then there are two other points I'd like to make today. Oh — yeah, excellent, there's Fabien. So the thing worked: you had to be pinged today. I sent ping, and you sent pong back, and I wasn't sure what that meant, but — excellent, you're here. So, Fabien, the only thing I just said is that Mark made a very good point today about the testing: for those who go to the Future Text site and do the testing, we who put things there need to better write what to test — so it's not, you know, you go into the environment and have no clue what's going on. So that was absolutely fair enough.

Frode Hegland: I also suggest — and I'd like to see what you think — comments for specific things: maybe just put them as comments on the pages, so you don't have to write a whole document. What makes more sense? Where would you guys like to write comments for specific tests? Okay. Okay, fine: we'll write them in WordPress. That was easy. And then there are two other points. One of them is — and Mark and I had a discussion on this topic earlier, so it's not just me — we need to do something more exciting. You know, we're spending the first few minutes talking about Sloan; today is not about Sloan, but it's worth a few minutes because of our thinking. The thing that really struck me over the last few days: first, Keith and I had a good session where we kind of scared each other about what we need to do. I finally managed to get my own Reader software to work in the Vision under TestFlight, and it's pretty awful — some code things that need doing, about scaling — but actually reading in it is fine. So reading in XR is a solved problem. You know, there's nothing more for us to do there in the sense of just looking at flat text on a thing: it's done. What we need to do — and I think many people have voiced stress about this — is make that reading something completely new. Right? It cannot just be a flat document. Now, Andrew has put together some flat-document things because we're getting the foundations in place.

Frode Hegland: But I really hope that all of us can think about how we can explode a document. How can we make it interactive? This is the time to think about that, because here is my final point. We are not really trying to augment academic reading in XR in and of itself — that's too big. What we're trying to do is have someone put on a headset, spend under five minutes in it, and get convinced that there's something special here, because this will be shown at academic conferences and such. So most of what we're doing — unless someone disagrees, and then we'll have a discussion — is theater. It's giving an impression that this is not normal, that this is a whole new way of reading. In order to do that — because Mark and I also had a (cough, cough) discussion today about flat versus dimensional documents — of course they need to be dimensional; the question is how to make that possible. So I got in touch with Bob Stein, who did the Voyager books and all of that stuff; he was big with interactive books in the 90s. I'm going to have a meeting with him on Wednesday — just a little intro chat for this. He's a good friend, but I want to give him our context, and then I hope he can join us on a Monday, where we can kind of interrogate his 90s ideas and see how they fit in our time. So, Fabien, over to you. But I just wanted to say: how can we make something cool, interactive and impressive in five minutes? That's something to think about for Wednesday. Fabien, please.

Fabien Benetou: So I think one wrong way to do it is one that I might try anyway, because I'm showcasing to some colleagues at the European Parliament what reading and managing documents in XR is like. And ironically enough — I don't know, it kind of shocked me — they want to be overwhelmed. They want to have, like, documents and stuff everywhere, floating and whatnot. So honestly, I don't think feeling overwhelmed is the right goal, but I'm going to try it. Basically, I asked them to give me everything — links and videos and rich content, etc. — that they have on a topic. They have fact sheets, summaries of a certain topic, let's say, and then we're going to put everything related around that document, and we just see what happens, basically. So I really don't think it's the right way, but I can tell you in a couple of days or a week whether the naive approach — throwing things around and seeing what sticks — might actually work. And why do I mention it, even though I don't believe in it? Because I don't really know otherwise; at least it's a way to try. It might also be a way to emulate what we already do when we go to, I don't know, the library, or when you have the whiteboard and literally magnet or stick things around. So let's say it's very naive. I don't like the motivation, because I don't think feeling overwhelmed is necessarily good — I find it strange, let's say. But yeah, that's what we're going to try. Maybe it makes sense. And again, my whole metric, let's say, is learning. It doesn't really matter if we reach a certain goal as long as we learn on the way, and by trying this I believe we're going to learn something. Is it going to be enough? I don't know.

Frode Hegland: All I can say is: Dene?

Dene Grigar: I think before we call something naive, we may want to try it, and we may want to work with academics to see how they actually want to work, rather than what we think they should be doing. So let me caution us all not to begin ascribing adjectives to what academics do; rather, let's talk to all of us who are academics and find out how we want to read in a new way. I mean, I feel incredibly limited by the way we're reading now. What I want is something different — but I do want things in front of me, scattered around, so I can find things at a whim without having to ruffle through a stack of books. There's got to be a better way. And as far as the emotional attraction: I think there is emotional attraction in the content, and in the actual kind of cinematic experience. So, theater, maybe — but I think cinema is more what interests me. And I think that's probably Bob Stein's stance too. Bob came and spent some time with me in my lab right before the pandemic and gave us his whole collection — we have a full collection of the Voyager CDs. And the thing that he really stressed in that work was giving that kind of beyond-the-book experience. So it'll be interesting to see what he says about that stance. Thanks.

Frode Hegland: It's interesting how we're using different words for what is probably similar. So, I spent another morning working in the Vision Pro at my favorite coffee shop — where Keith has also been with the Vision — and a couple of observations. Number one, people actually pay attention to this headset compared to others, which is interesting. But I have the wrong prescription, which Hussain is helping me with — thank you — and Dene is going to send me the lenses. Thank you. But the point is, I had a lot of stuff open just to do that one thing, you know, all over the place, and I felt it was a little bit messy. But I am hoping that Alan will join us, and on Wednesday he actually wants to run a full session on user story mapping, to find exactly what academics want. No question about that. There was something else I was going to say... but Keith, what were you going to say?

Keith Martin: Just a quick reflection on what you were saying, Fabien, about people wanting drama, or to be overwhelmed. Maybe it's not so much the overwhelmedness; it's just the drama and excitement. Perhaps you can explode documents out so you can just go: okay, see everything — and then have it collapse down, so you're not in this chaotic environment while you're trying to actually focus and get on, but you can just say "boom". That's dramatic, and potentially useful if you can start working with spatial organization of stuff. Anyway, it just crossed my mind that maybe it's the drama, the wow factor, rather than the actual overwhelmed factor.

Frode Hegland: Yeah, yeah. Fabien, go ahead. Then I have something adjacent.

Fabien Benetou: Yes. It's one of my favorite moments, actually, when I have to, let's say, dig into a topic and I'm going to have, I don't know, 20 papers to read — because I gathered them, I collected them, basically — and it's too much. I know I can't read them all. So I'm going to organize: basically, I'm going to toss them on the floor, literally, have them in a circle around me, and look for links or patterns, or ways that this one, for example, needs to be understood before that other one, etc. — find some structure behind it. And it is indeed a bit dramatic; it's maybe a bit thrilling or exciting. I would just argue it's more than overwhelming. The way they formulated it when I had the discussion felt like the wow effect was kind of marketing or sales, which, yeah, kind of triggered me. But when I said "naive" before, I didn't mean it in a pejorative way. It's naive in the sense that we don't really have a better path, as far as I know. So sure, let's give it a go. And the dramatic or overwhelming aspect — if it helps to tackle tasks that initially felt too challenging, and then we can go through some literal motions to organize that information, there is probably something there.
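
A minimal sketch of the layout Fabien describes — papers in a circle around the reader — as it might look in a WebXR scene. This is an illustration only, in TypeScript with three.js; `layoutInRing` and the A4-sized placeholder panels are our own names and assumptions, not code from the project:

```ts
import * as THREE from 'three';

// Place n paper panels in a ring around the viewer, each turned to face the
// center. Units are meters (WebXR world units); tweak radius/height to taste.
function layoutInRing(panels: THREE.Object3D[], radius = 1.5, height = 1.4): void {
  const n = panels.length;
  panels.forEach((panel, i) => {
    const angle = (i / n) * Math.PI * 2; // even spacing around the circle
    panel.position.set(Math.sin(angle) * radius, height, Math.cos(angle) * radius);
    panel.lookAt(0, height, 0); // face the (seated) reader at the origin
  });
}

// Illustrative use: one plane per paper, to be textured with a page render.
const scene = new THREE.Scene();
const papers = Array.from({ length: 20 }, () => {
  const mesh = new THREE.Mesh(
    new THREE.PlaneGeometry(0.21, 0.297), // A4 aspect at roughly real size
    new THREE.MeshBasicMaterial({ color: 0xffffff, side: THREE.DoubleSide })
  );
  scene.add(mesh);
  return mesh;
});
layoutInRing(papers);
```

Replacing the even spacing with per-cluster angles would be one way to express the "this one needs to be understood before that one" orderings he mentions.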

Frode Hegland: Dene, I see you talking to Hussain. And Keith — Hussain will be introduced properly in a few minutes, because he has a bit of an exciting topic for us. Keith is my friend — okay, old friend, let's use the term. He used to be the MacUser technical editor; that's how we met. We also taught together at UCL — no, LCC, excuse me. Now, one thing I've noticed when I put this thing on people's heads — probably the same as you all have noticed — is that even with the Vision Pro (which needs to have a guest mode, and all these things), people don't just put it on, look around and say: wow, the resolution is really high, this is amazing, this looks great. Nobody does that. There is a bit of wow, but everybody wants to interact with it: you know, how can I take this? How can I open that? So I think for our own work, when someone puts the headset on for the Sloan project, to read or write or whatever it is, they've got to be able to get their hands dirty immediately. That's something we all agree on, right? It can't just be a nice view.

Frode Hegland: Even if it's dimensional, it's got to be: how do I lift it? How do I turn it? How do I stretch it? How do I do this? Right. So that's kind of important. Now, on that: I have posted on Basecamp and also in Slack a really simple thing — and this was supposed to inspire you guys to say how horrible it is and do something better; Andrew's going to show it on Wednesday — a little kind of control panel. You touch your wrist and you get the control panel. It's really, really basic. Keith and I spent a couple of hours on it this weekend, with just the idea that over time you learn a gesture. We're not at all saying it's the be-all and end-all of controlling something, but we've got to have some way to say global things: how should this be organized, how should that be done? So when you have a look at it on Basecamp or Slack, please understand that it's a provocation. Dene? And then I think we can go over to the other topic.
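
The wrist-summoned control panel could be prototyped with the WebXR Hand Input API. A hedged sketch of just the anchoring step, assuming a three.js session requested with the 'hand-tracking' feature; none of these names come from Andrew's actual build:

```ts
import * as THREE from 'three';

// Called every XR animation frame: pins `panel` just above the left wrist,
// so a glance (or tap) at the wrist brings up the controls.
function pinPanelToWrist(
  renderer: THREE.WebGLRenderer, // renderer.xr.enabled === true
  panel: THREE.Object3D,
  frame: XRFrame
): void {
  const refSpace = renderer.xr.getReferenceSpace();
  if (!refSpace) return;
  for (const source of frame.session.inputSources) {
    if (source.handedness !== 'left' || !source.hand) continue;
    const wristSpace = source.hand.get('wrist');
    if (!wristSpace) continue;
    const pose = frame.getJointPose(wristSpace, refSpace);
    if (!pose) continue;
    const p = pose.transform.position;
    const o = pose.transform.orientation;
    panel.position.set(p.x, p.y + 0.08, p.z); // hover ~8 cm above the wrist
    panel.quaternion.set(o.x, o.y, o.z, o.w); // follow the wrist's rotation
  }
}

// Usage inside the render loop:
// renderer.setAnimationLoop((_, frame) => {
//   if (frame) pinPanelToWrist(renderer, panel, frame);
//   renderer.render(scene, camera);
// });
```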

Dene Grigar: I do want to mention that when we are working with PDFs as academics — I'm not talking about beautiful leather-bound books, right, just picking up an article to read — I don't pick up an article and go: wow, what a great PDF. I don't remark on the physical aspects of a PDF. So I'm imagining someone's going to put on the headset, and they're not going to go "wow" about just the way this thing looks; it's going to be: what can I do with this thing? It's the actual doing that causes us to want to engage. So the impetus is what we can do, what we're going to be able to access, how we're going to be able to access it. So it's probably spot on that people aren't saying: oh wow, I can see so much better with my headset than I can with the Meta Quest 3. It's more like: this is what I can do that I can't do with the book. And that's the question we're asking: what can I do? How can I read better in this environment? And by better I mean: get the information we need, share that information, do something with that information, disseminate it in some way. Right. That's the key thing.

Speaker8: Yeah.

Frode Hegland: 100,000,000%, Dene. I also see Fabien has written "exciting affordances" — exactly, that's a good way of looking at it. And you know, in terms of how we're going to make it possible: we're not going to start a new document format; we're not going to do any of these things. It has to be based on PDF, because that's what it is. But of course, one way we can make them more interactive is Visual-Meta. And we really need to have a discussion, maybe offline, with ideas going back and forth, on how we do that. One of the basic things we've talked about: when you go through your own document, or you're an editor, and there is a graph, you draw a rectangle around it and you say, here's a URL to an interactive version of that. So when the user is in a multi-dimensional space, they can literally pull it up and move it across as an object. These are things we talk about on and off. It's probably time that we settle on something like that that we like, and start testing. And I would like to provoke the community by saying that maybe what we do is have a stack of plain PDFs, but also our own book for this year. Maybe we should find a way for all the contributors to be able to add this kind of stuff themselves — so we can make it more amazing, and that means everybody can take ownership over what amazingness they want in their own article. So yeah: Mark, and then we go on to the non-Wednesday topic. Sorry, and thank you, Mark.
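
For the rectangle-around-a-graph idea, one plausible encoding — a sketch only; the field names are ours, not the Visual-Meta specification — is a per-page list of regions, each mapping a rect in PDF points to the URL of the interactive version, which the XR reader hit-tests before pulling the object out of the page:

```ts
// Hypothetical metadata a reader app could carry alongside the PDF
// (field names are illustrative, not a published standard).
interface InteractiveRegion {
  page: number;                                        // 1-based page number
  rect: [x: number, y: number, w: number, h: number];  // PDF points, origin bottom-left
  url: string;                                         // interactive version to pull into the room
  kind: 'graph' | 'table' | 'model' | 'video';
}

const regions: InteractiveRegion[] = [
  { page: 4, rect: [72, 420, 451, 220], url: 'https://example.org/fig3-live', kind: 'graph' },
];

// Hit-test a tap on a rendered page against the declared regions.
function regionAt(page: number, x: number, y: number): InteractiveRegion | undefined {
  return regions.find(r =>
    r.page === page &&
    x >= r.rect[0] && x <= r.rect[0] + r.rect[2] &&
    y >= r.rect[1] && y <= r.rect[1] + r.rect[3]
  );
}
```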

Mark Anderson: I mean, I don't mind going later. It's sort of about this, but if somebody's actually got something to present, why don't we do that, and we can circle back later?

Frode Hegland: No, no, no. It’s great. Good timing, Mark, because after this, we’re going to talk.

Mark Anderson: Okay, just quickly, in terms of this excitement thing. One of the things is — I totally take the point about us using PDFs, and I'm very focused on that — but I'm not sure that we're exploring PDFs from the right perspective. I mean, I think things like — what is it — PDF.js or whatever, there are some things we can crack out of that. We've talked quite a lot in the past about AI affordances: have we actually had a crack at that, for instance? The thing that doesn't make sense to me is the idea that authors are going to do all this extra work to mark up their documents. In all the other projects I'm involved in, it's hard enough to get them to do the things they're supposed to do at the moment. So yes, if it's baked into the tools; but realistically, post hoc markup of documents, I suspect, won't happen. Now, whether an AI can do that is another question. One of the reasons for the HTML document I put in our Slack earlier — for those who have Slack — was trying to get at this question of: well, what are academics actually doing with PDFs? You know, with papers, which normally come in PDF form these days. The answer is that the last thing you do is a soup-to-nuts read-through. You may do that at the end, especially if it's something really interesting, but a lot of it is triage. So, for instance, when we put a document into our VR space, perhaps the key affordances we've got are things like the abstract and the conclusions — key bits you can go to — and indeed we might even blank out the other bits, because you don't want to see those; you want to see the bits you want to see. So anyway, that's really where my mind's going on that.
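
Mark's triage idea is easy to test with PDF.js itself. A rough sketch, assuming `pdfjs-dist` with its worker already configured; the section-matching regex is our naive heuristic, not a PDF.js feature:

```ts
import * as pdfjsLib from 'pdfjs-dist';

// Pull the plain text of every page; PDF.js gives positioned text items,
// which we flatten here (enough for keyword triage, not for layout).
async function extractPageTexts(url: string): Promise<string[]> {
  const doc = await pdfjsLib.getDocument(url).promise;
  const pages: string[] = [];
  for (let i = 1; i <= doc.numPages; i++) {
    const page = await doc.getPage(i);
    const content = await page.getTextContent();
    pages.push(content.items.map(it => ('str' in it ? it.str : '')).join(' '));
  }
  return pages;
}

// Naive triage: keep only pages mentioning the sections a skimming academic
// goes to first. A real version would use headings and their positions.
async function triagePages(url: string): Promise<string[]> {
  const pages = await extractPageTexts(url);
  const wanted = /\b(abstract|conclusion|discussion|references)\b/i;
  return pages.filter(text => wanted.test(text));
}
```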

Speaker8: Yeah.

Frode Hegland: Okay. So, I don't know who will do the markup, so to speak — adding these extra things. That's going to be an interesting question. Of course, people add their own images and so on today to an academic document, but this is clearly not the same thing, and you're pointing out a real, real-world problem. So we would have to figure out a way to actually make it baked into the tool, even if the tool is Word. You know what I think about Visual-Meta: that can solve part of it — you can say what page certain things are on. But the real thing — and it would be really great, guys, whether you can come on Wednesday or not, whether you feel involved in Wednesday or not — if you have some ideas for how you think magic stuff can happen in a document, please don't keep it a secret. This is the really, really good time, because Andrew's made some great progress with the basic interactions. So now we can start looking at prioritizing and figuring out—

Frode Hegland: Sorry, Andrew, I just read your message on the side. I got confused — thought it was new. So that's a big thing.

Frode Hegland: Yeah, we could go on about this forever, but we're all happy. Okay. So, Hussain is here today because he had what in the beginning sounded like a completely absurd and crazy and stupid idea — and therefore it isn't. When I went to see him the other day, because I had problems with my prescription for the Vision, blah blah blah, he and his brother helped me with my eye tests and all of these good things. And I brought the Vision, and he pointed to it and said he basically wants to make one — which was like: what level of insane are you? And then, as he will discuss now, he has some ideas with a company he knows. He's also looked at what Fabien has presented here. And on that note, I'd like to hand it over. Hussain, please: enlighten us. Inspire us.

Hussain Panjvani: Hi, guys. Pleasure to speak to you all, and to listen to you all. I'm really new to all of this and very keen to learn; it's really, really interesting. I thought I'd just start with a little bit about me, because I'm completely random to you guys. I was born in the UK, and my parents were born in East Africa, in Tanzania. My wife is from Hyderabad in India. My best friend, my life coach and my consultant is my nine-year-old son; his name is Ian. I'm a business owner with my family, and I was born into the world of optics: my father bought his first practice the day after I was born. In general, as a person, I feel like I'm a creative. I enjoy evolving the process of doing things, and I love problem solving. Now we get to this. I feel it's now very apparent how AR and VR could change our lives. If developed correctly, I think we would be in a position where we could throw away every device that we own and access all of our needs in a virtual space. Currently — well, in 2023 — the industry was worth 25 billion, and it's forecast to be almost three times that by 2028. So, developed correctly, the average person wouldn't need their desktop or their laptop or their television.

Hussain Panjvani: I mean, you might need a projector for sharing — watching a movie with someone — but you wouldn't need it in general. You wouldn't need your games console, because it would all be included in that. The tablets, the smartphones, the watches: they would all maybe be used very differently; we wouldn't be as reliant on these devices. When the iPhone was released, on my 17th birthday, it changed the way that we function — it ingrained itself into our nature. The first thing that you grab in the morning is your iPhone; the reason that you've woken up is that your alarm has woken you from your iPhone. Then the iPad was released as a medium between the computer and the phone, and now we have these on our wrists, which do everything that we need them to do. Now Apple have entered the market of VR. They've priced it at $3,500, plus any clip-ons and add-ons you may need, with a 2-to-3-hour battery life — you can use it while it's plugged in, but that just means you're going to be very static in it. You're not going to be able to use it all the time.

Hussain Panjvani: Mainstream, I mean. But it's for you guys — it's for people like Frode, it's for people to develop, for you all to play with and build a new world for people like me to then go and use. I'm really interested in the idea of this going mainstream — not so much in making something myself. It affects my industry: I mean, the real estate that this takes up on my face right now is affected, so it affects me directly in that way. What I understand, from the glasses side, is what it takes to make a good frame, what it takes to make a comfortable frame, how the end user is going to be happy. This real estate around your ears is really, really sensitive if not looked after correctly; the same all around the nose. How close is it to your face, so that it doesn't fog up? How do you make your glasses work for you? I wouldn't say I know a frame inside out, but I'm very, very good with a frame, and I know what the customer would want. But when I'm looking at the tech, and marrying the software and the hardware together, I feel it all depends on what the customer wants and what the customer needs.

Hussain Panjvani: For them to buy it, it's got to be worth it to them — worth it price-wise, plus many different facets. I think it needs to encompass and do everything that their current setup can do, but better. I mean, it's an amazing world. I tried on the device for a couple of minutes, and Frode's exactly right: I didn't say, oh wow, look at the resolution. I'm used to all of that, looking at my iPad or iPhone or my MacBook; I'm used to that resolution, I'm used to the look of the apps. I wasn't expecting anything less. I'm expecting great usability. I did struggle a little bit using it — I think that's mostly because it was bespoke to Frode's eyes. But it's the first version. Number one, it needs to be worth it and do everything their setup can do. It also needs to look good and feel comfortable, otherwise they're not going to wear it. The first iPhone was heavier, it was clunkier, and then they made it sleeker, and then made it out of titanium.

Hussain Panjvani: And now it's really nice and sleek, and it's an accessory to people. It's not an interference to use; it's part of them. Other factors: they need to be ethically and morally correct. I mean, in this world of VR, your eyes are hidden. It's got two really powerful screens in there so that you can use it; it's got amazing tech in there, where it can pick up your hands with all the cameras it's got. But if I'm wearing it at home, doing my work or whatever I'm doing, and then my son comes home from school — you need that eye contact, which is what you're somehow missing in this. I feel like for it to go mainstream, you need that. I think that's where AR would come in — rather than, well, alongside VR. Another thing is they need to feel familiar as well. It needs to feel like something that doesn't feel abnormal to use, something that doesn't look like it's out of a movie set in the future, something that just feels right. And one idea, when I first started thinking about this, was to have a sunglass-looking device, something similar to this.

Hussain Panjvani: It's got one screen at the front, and then you can clip the screen off and clip it back on, maybe using some sort of magnet. But by the time something like that is developed — and I'm not sure how far we are from doing it correctly — it might not need that sort of mechanism. I thought it was an interesting concept, but it just depends on how far ahead we have to look before we can have a clear lens in there and do it properly. So the current idea that I'm sitting on right now: at the moment, the Vision Pro is amazing for when you're sitting still, maybe not so much walking around. I've seen some YouTube videos of people wearing them for many, many hours, and then they feel the discomfort on their face. But realistically, no one's going to wear the device for 24 hours in a row. The idea for right now would be to have a nice sleek frame, but maybe not as much tech involved in it.

Hussain Panjvani: Maybe something like what the Kindle is to the iPad — something that we would use just as a reader. It's familiar, too: people put on their reading glasses to read, so you could put on your — whatever you name it — to read. From what I've been watching of what you guys are working on and creating, what we can do with it is amazing. And for you to be able to put on your reader and walk into your library and find your space and pick up your book — it's something that I think would work really well. But I'm really new to all of this. It's really interesting just listening to you guys, watching you guys, and hearing what you have to say. This is just my idea of how we could interplay the hardware and the software together. And that's about where I am with my idea at the moment.

Speaker8: Thank you.

Frode Hegland: You didn't mention the Italian company that you work with, where you got access to and an understanding of some of the really cool hardware. Yeah — very Tron-like shades. Do you have them there? Ken, you need to see this design. What do you think?

Hussain Panjvani: I mean, this is one of their concept pieces. I went to the factory in Italy last week. A beautiful factory, fully sustainable: 90% of the factory is run on hydropower, and 10% on solar power. It's in the beautiful Alps, in a little town called Lago. Everything is sustainably done: all of their paint is recycled, their metal is all Japanese-sourced, and it's a very, very good product. We work with a number of different frame suppliers, a number of different brands. And I've liked products before — a new product could make me money — but when I've really looked at a product, I can always find a fault in it. With this product, I just felt a connection, even before they invited me out to the factory and before I'd even placed an order. The processes that went into building this product were amazing, and that was cemented when I actually went there and saw how it was put together. The innovation in making a great product was something exciting to see. It was nice to see that no corners are cut. And good people — the factory people looked after me well, showed me a good time. So yeah, it would be really exciting to see what you guys have to say about this.

Frode Hegland: Yeah. I feel like this discussion is very much... So, Alan, who is not here today — one of the things he talks about is paper. You know, most of our discussion is very much about the whole space and all of that, and then there's paper. And now you're coming here and talking about basically Google Glass, in a way, right? Where you see the world most of the time and you have an overlay. Not a new idea, but there's tech now that can make it happen, and it can happen in a really interesting way. So I wonder, particularly Fabien and Brandel, since you've talked about this, if you have any commentary.

Speaker8: About lighter glasses.

Frode Hegland: Lighter glasses.

Ken Perlin: Lighter.

Frode Hegland: Lighter. Lighter. LiDAR would also be fun.

Frode Hegland: Yes — well, I remember Adam. I don't know if Adam can hear us right now, because he's mostly on holiday. But I remember one of his tests — this was with the Quest — it was just having text in front of you, and you could toggle having it locked in space or locked to your vision, so you could just lie back and keep reading your book. So I think there is a great future of text in really, really simple displays too. It doesn't have to be a facehugger from Alien all the time. Fab, I'll be on mute — you want to re-show what you showed earlier and talk us through it again? Ah, the glasses — so I know what it is.
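Adam's toggle — text locked in space versus locked to your vision — comes down to which node the text panel is parented to in the scene graph. A minimal three.js sketch, with names of our choosing (it assumes the camera has itself been added to the scene):

```ts
import * as THREE from 'three';

// World-locked: the panel stays put when you move your head.
// Head-locked: the panel is parented to the camera, so it follows your gaze
// and you can lie back and keep reading.
function setHeadLocked(
  panel: THREE.Object3D,
  camera: THREE.Camera, // must be in the scene graph: scene.add(camera)
  scene: THREE.Scene,
  locked: boolean
): void {
  if (locked) {
    camera.add(panel);
    panel.position.set(0, 0, -0.8); // ~80 cm in front of the eyes
    panel.quaternion.identity();    // face the viewer squarely
  } else {
    // Re-attach to the scene, preserving the current world transform.
    scene.attach(panel);
  }
}
```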

Fabien Benetou: So — I don't have glasses, sadly, but I 3D printed some. This is a monocle that you would normally clip onto your glasses, like so. Basically it's a screen — you might see the brown part there; the screen is here — and then you have a transparent part, so you can look at the person in front of you, and below you might see the cut, or the angle, there. So it's transparent, or translucent, and it reflects the content of the screen. I forgot the resolution, but it's 400 pixels by 300, something like that, or 600 by 300 — I put the link in the chat so that you can see it. In my opinion, there are three interesting things here. First, it looks a little bit cyberpunk: it's hard to go out in the street and not look a little bit funny, so I think it prompts some interesting conversations. Number two, it works. It's not a cosplay prop: you can really use it, and you can buy it today — just order it online, you receive it, and it works. I think that's really cool. And third, last but not least, it's open hardware and open source software. Now, it is not, let's say, comparable in performance to a Vision Pro in terms of compute capability.

Fabien Benetou: It's like some cheap small chipset, for testing and debugging. It also means everything smart, let's say, is delegated to the companion phone, or even the cloud if you want. So it's literally just a display — but that makes it really super light, and I would argue it's relatively cheap: I think 300 bucks. So if somebody steals it from me, I'd be annoyed, but it's not the end of the world. So as a platform for testing, it has all the attributes: relatively affordable, light, I don't want to say obnoxious, but obviously visible, and all open hardware. I think it's really cool, and it's just a starting point. And the community around it, I think, is quite open to discussion and suggestions on how to build for it and around it. It's also, let's say, the gap between projecting ourselves too far and having something real: it's still thick — I don't know, maybe eight millimeters thick, so it's not glasses thickness — but in terms of pragmatically projecting ourselves into the future by making something, rather than waiting for something to happen, I find it quite exciting. So that's one path. And the other one — which I don't have here, because I lent it to a friend — is much more in this kind of space.

Fabien Benetou: That form factor is the Project North Star — I'll put in the link. It's very different, because you need to really own it, but it's also open hardware and open source software. So as a platform for tinkering, basically the limit is you and your imagination, and you can add on whatever you need. My whole point with this — sorry for the length — is that initially, making a headset might look like it's definitely not for the random Joe in their little basement. And I would argue it is. I'm not saying it's trivial, and I'm not saying anybody could do it in five minutes, but it's also not that complicated, because the tools — at home, at the office, or in fab labs around you — laser cutting, 3D printers, designing your own PCB and soldering stuff onto it: that's just feasible now. So I would argue it's good to explore this kind of thing, to be able not just to think about software and interaction — which we're already doing, and I obviously think is really worthwhile — but to say: okay, the hardware we have is not enough, and we don't want to wait for yet another release from a random platform that might have design decisions very different from ours. So let's build it.
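
The division of labor Fabien describes — the glasses as a dumb display, with everything smart on the companion phone — can even be driven from a web page. A hedged sketch using Web Bluetooth, assuming the device exposes a Nordic-UART-style serial service (the UUIDs below are the standard NUS ones; the `print(...)` command is a placeholder for whatever protocol a particular device's firmware actually defines):

```ts
// Standard Nordic UART Service (NUS) UUIDs; many open-hardware wearables
// expose a serial console over this service. Verify against your device's docs.
const NUS_SERVICE = '6e400001-b5a3-f393-e0a9-e50e24dcca9e';
const NUS_RX = '6e400002-b5a3-f393-e0a9-e50e24dcca9e'; // phone -> device

// Send one line of text for the display to render. All layout and "smarts"
// stay on this side; the device just draws what it is told.
async function sendLine(text: string): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: [NUS_SERVICE] }],
  });
  const gatt = await device.gatt!.connect();
  const service = await gatt.getPrimaryService(NUS_SERVICE);
  const rx = await service.getCharacteristic(NUS_RX);
  // Placeholder command; real firmware will define its own protocol.
  await rx.writeValue(new TextEncoder().encode(`print(${JSON.stringify(text)})\n`));
}
```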

Frode Hegland: It would be really nice to hear some usage scenarios for this. I can imagine someone skiing, or doing another sport, and getting, you know, arrow directions for where to go. And I'm sure there are lots of other interesting ones. Education, of course: you're sitting on a bus and you're trying to learn something — flashcards would fit on this thing. You know what? It's 640 by 400, right? What was the resolution of the first Mac? Keith? Quick, quick, quick — first Mac, nine-inch screen. Oh no, you're looking it up; I thought you would know that. But imagine HyperCard running on this thing.

Keith Martin: Well, it was 640 by 480, wasn't it?

Frode Hegland: Maybe. Dene does a thumbs-up, and Dene knows.

Keith Martin: 640 by 400 was the PowerBook — the first color PowerBook.

Speaker8: No.

Frode Hegland: So, okay: if you had some really cool sunglasses or normal glasses, and you had a little flip action — not necessarily physical — or something to activate some text and/or images while walking down the street or wherever, what would you use it for? Text messages would be one, right? Ken, please.

Speaker8: Oh, you're muted.

Ken Perlin: Course was the first person back in what was it 2017 or something that when North started pushing their version of Google Glass. In New York City, I went in for a test drive. And, you know, and it looks it looks kind of like it looks like a pair of sunglasses. And it’s the same general idea you have, the one, you know, you have the thing that sort of shows you text and there’s it’s voice activated. So you can ask Google, show me directions, or where’s the restaurant or this or that. And my my reaction to it was more than negative. It was alarmed. And I’ll give you my reason. I live in Manhattan and in Manhattan. You have people walking around with phones, and they walk right into the street looking at their phones, and there are cars, and the people in the cars know that the people walking across the street don’t see them because they’re staring down at their phones, which saves millions of lives every day. And what happened when I was trying this thing was this nice young woman who was putting it on, and she was right there. And she said, can you see it? And it was even though it wasn’t made for me, I was using the storm model. I could see it. It was fine. And I saw, you know, there’s directions and there’s the restaurant menu and all that. And it really came home to me in that moment. That vision is not like audio. You know, I’ve got my little, you know, Bose Sports, which is audio. And I can be listening to this and someone’s talking to me and I’m listening to music and it’s just fine.

Ken Perlin: I can hear someone's conversation and follow it while the music's playing, because that's how audio works. But I realized that while I was reading that menu and those Google directions, I couldn't focus on the woman who was right there in front of me, because that's not how vision works. And so if you had people wearing these glasses, they'd step off the curb and they'd be hit by a car and they would die. And so I realized one of the reasons we need to either level up to these things, or to these things — you know, in the future they'll look like this, but they'll be more like this — is that we need these things to actually be in the world, stuck on a surface or in a place, because we need to still be paying attention to the world around us if we're going to walk out into it. Otherwise we're going to be in big trouble, because people won't know that we don't see them — that we're basically kind of blind, in a way, while we're reading our little menu. And I understand that Google Glass is fantastic for what it was originally made for — warehouses and factories: you look at the QR code and you see where the box goes, and it's perfect for that. But I just don't believe that that whole generation of stuff is really appropriate for social interaction in the real world, for physiological reasons — for the way our brain works.

Dene Grigar: I agree with you, and I would add one more thing: we need to not isolate ourselves. I mean, I can't speak for other countries, but here in the US we're so isolated, right? And we're becoming more so. My students are so isolated in the work that they're doing, with their gaming or their social media spaces; although they're reaching out socially, they're still not getting out. They're not leaving the house. So my concern is: once we put the headset on, how are we going to be blocking out the rest of the world, or how are we going to reconnect to the world? So I think social interactions are going to be very important. And I do want to say thumbs up about the security issues, because there are a lot of women my age, or academics in my position, who don't want to put a headset on because they don't want to be blinded; they don't want to not see what's going on around them. It's kind of like when we run: when I jog outside, I don't wear earbuds, because I want to hear what comes up behind me. I have been grabbed in the street before, right? And I don't want to miss that sound. And I think the same thing here: we don't want our senses to be disrupted. We want to be able to be in the moment, in the world, no matter what system we're using.

Frode Hegland: Thanks. So, Ted Nelson told me once, when he was living in New York City — you know, massive trucks driving down the road — he noticed that if he gave the drivers eye contact, they would be really aggressive and kind of try to run him over, so to speak. But if he made sure they didn't see him looking, they would brake and be careful. Which is kind of an interesting psychological thing — I would not want to bet my life on it. So there are some other aspects to this. The other day, poor me, I had to carry two bags of stuff — two cakes for someone, an elderly lady — and I got all these text messages and I couldn't check them. I am not saying that is a frequent use case, but it would have been really useful if something came up here. But then there are two other aspects. One of them is — and of course this is a little bit polarizing, on purpose — it doesn't have to cover the main vision area. Of course, not that you guys are saying it should. But I could imagine such a thing as maybe color dots on top.

Frode Hegland: So you'd get alerts for different kinds of things. Maybe even — we're talking about safety — like at a university: if there is some kind of bad thing happening, you get a text, but you also get, you know, literally a red blinking light in front of your face. And then finally, a little bit along the Google Glass line: imagine if you could take a picture — a low-resolution picture — and instantly share it with somebody you care about a lot, so they can then see, in the same field of view, what you saw, just as a little picture to share. When I was suffering with Dene in Washington State — just kidding, Dene, it was a wonderful trip — my wife would send me tons of videos of my son, and that was really nice; it helped me stay connected while I was away. So to have just a little bit of "look what I'm seeing right now" might be fun. So there's a huge swath of good and bad options. Fabien, it is your turn. What are you showing us?

Fabien Benetou: I'm showing you the Meta Ray-Ban — the second version, I think. Why I'm showing you this is because there is this little dot by the camera for when it's recording, or when you're taking a photo or whatnot. I think it should be red. I don't know — it sounds like a small thing, but we're so used to the convention that when an electronic device is on and recording, the light is red; and here it's not. I think it's breaking some kind of social norm — some kind of, how do you say, signal we're used to. And I think it's done on purpose: they don't want to freak people out, especially Meta being Meta. So yes, it might sound small, but we're all, as citizens, let's say, going to adapt to this, and it has to be done the right way. This, for example, is technologically beautiful — it does work, it's so light, etc. — but in practice it's trying to trick us, I would argue. And more broadly, I want to add to everything that's been mentioned — I mean, it's kind of my job, so it's a bit different, but I have like four headsets right there on my desk in front of me, and seven computers, from the smallest one to the biggest one. And yet, when I have a conversation with you all — I don't know if you noticed — I don't, for example, put the phone on the table. And if I can, I won't even have it in my pocket.

Fabien Benetou: I will try to have it in another room. So: I'm super excited by technology, but I also know my own limits. I know that if I feel my phone vibrating — and I don't have a loved one at the hospital, so it's not going to be actually urgent — I want to spend my time with you. So I have a kind of Ulysses pact, I think: I set the temptation away. And I think it's about us being mindful of the technology and how it might, through dark patterns, break some of the quality social interactions we must have — otherwise it's completely pointless to even have a conversation. But it's also not the technology per se: if we have some kind of hygiene in how we use the technology, and we don't allow it to use us for its own goals, or the goals of whoever is manufacturing it, I think it's okay. So I want to be excited by it without saying: oh yeah, let's accept everything — because indeed there are some social risks to it. But I don't think it's good, either, to say it's inevitable that it will kill conversation. I have my super-fancy phone, my glasses, whatnot; when I want to talk to somebody dear to me, I set the thing away. It's not stuck literally to my face. So it also falls on us to be mindful and detached from it.

Frode Hegland: Yeah, that's really important for me. And I just wanted to say: I was just texting with Bruce — Bruce Horn — and he said the Oakley Radar Pace running-and-cycling coach is what he worked on developing. I think it's mostly audio. He's going to try to join us, but he's currently at work — as you should be, Brandel, but I'm glad you're here. Just to get a perspective: I think it's mostly the audio side, but they're still smart glasses, and there are so many things I could imagine. They could even vibrate — right or left — for navigation. Anyway: Brandel.

Brandel Zachernuk: Yeah. The issue with the vision — the general sort of colonization of vision that Ken mentioned — is an essential danger; we need to be attuned to what the relative risks are. And I think it's a combination of signals and legibility and conventions, because we all understand vision to be more or less the same bundle of things, in terms of what we'll be able to get out of an environment, what we'll be able to notice and see. That's not true, but it's been true enough. You know, some people will be able to tell whether some wheelie bins have been shifted six inches since last week, and other people will barely be able to recognize that the cars are one color or another; but there's enough of a common basis for what we understand to be the relevant and critical information in the hierarchy that we will see as we walk down the street. Partly because we have assumptions about the way other people's minds work — which are correct to varying degrees — but also because we have a reasonable basis for socially interpreting the level of attention that somebody has. So if somebody is lifting up a map, then they're going to be impaired; and the frequency with which somebody lifts up a physical map is so rare that it's not an issue — or not such an issue that people feel put out by it.

Brandel Zachernuk: They're obviously having less of a fun time than you are by having to deal with it as well. One of the things that is so challenging with the glasses is that there's no mutual legibility — and the color of the light is a big deal. But there are also other things that can be done. For better or worse, the sort of mutual surveillance that these devices can put one another under would give the ability to make inferences and guesses about the kind of attention that people are paying to things in their environment: whether somebody is walking like they're paying attention, versus walking like they're impaired. Something that I often think about as I'm driving down a highway is how quickly somebody is slaloming inside their lane, and what implications that has for their attention. I'm kind of confused that no aspect of self-driving cars has any kind of attentional modeling of how dangerous a given driver is — or at least it's not part of the discourse. But I think there will be, you know, meatspace, non-digitally-mediated measures that are a little bit more sophisticated, that allow us to understand how well people are paying attention to each other; and that could also be digitally mediated — whatever signals are present about how people are using their devices might be brought to bear on it.

Brandel Zachernuk: And, you know, I am conscious of the challenge in saying the solution to technology is more technology. But I also think that a hybrid environment, where we have the ability to make guesses and inferences about how people are experiencing the world, will help us live together within it. And so, yeah: some of that is convention; some of that is learned physical attention — people being able to read those signals, like you were saying about Nelson making eye contact. That's a tiny signal, you know — just a tiny signal, if you think about how small his eyes were and how fast a trucker is going in New York. Those are all very, very little things, all being responded to. And so, in that same vein: once we know what to attend to in a technologically mediated thing, we might have enough — and we may not, and then we might need regulatory and legislative intervention, of the kind that maybe Fabien is well positioned to influence. But yeah: being able to have a mutual agreement about the relative aspects of attention and legibility within an environment is going to be, unfortunately, a slow burn, and a lot of people are going to get hurt before we get there. So let's fix it.

Frode Hegland: I'm just going to say really briefly — Fabien, I saw you have your hand up — Tesla does do that. They check the quality of the driving, how safe you are, and if you don't show a certain level of safeness or whatever, you don't get certain upgrades and things. So yeah: good and bad. Not that I endorse it, but there it is.

Brandel Zachernuk: But they don't put, like, the score of other people's driving around you over them, and things like that — like, this person looks like they're pretty boozed, or this person is just being a real jerk; I know they cut people off, like, three cars ago, so watch out as they overtake you on the other side.

Frode Hegland: Yeah, that's a really good point. And that's related to what I just pasted in here — Dene was talking about safety. Depending on what kind of overlay we have: if we have a microphone in this thing, maybe it can check if someone's being really aggressive. Maybe it can hear — with an AI interpreting people in a different way than we can — so it can quite literally flash red: this person is getting a bit aggressive. Or flash red if you start talking to someone who is a mugger. Ken, please.

Ken Perlin: Sorry, sorry — I was muted. I spend about half my time in Manhattan and about half in Kansas City. When I'm in Manhattan, I walk and take the subway; in Kansas City, of course, you drive everywhere. And certainly, when you're walking down the street in Manhattan, or you get on the subway, you know immediately if somebody's going to be a problem — you've got your spidey sense, and you know how to use it. When you're driving in Kansas City, there's a whole different spidey sense: a car is going a little too fast, they're switching lanes too quickly. It's a different language, but it's a language that you learn. So I would just like to point out that we actually do use our human intelligence to assess, pretty accurately, what's going on with other people, very quickly — because it has to be very quick.

Frode Hegland: Yeah, absolutely. I was just thinking of cases where people may not have that sense, for various neurological reasons. And also, you know, our watches now will tell us if we're in a really loud environment. So I'm just saying there are different kinds of things that can be measured that we may not notice over time. Fabien.
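
That kind of ambient measurement is already within reach of ordinary web code. A minimal sketch, assuming only the standard getUserMedia and Web Audio APIs; the threshold value and the onLoud callback are illustrative, not part of any product mentioned here:

```ts
// A rough sketch: estimate ambient loudness from the microphone in a browser.
// Standard APIs only (getUserMedia, Web Audio AnalyserNode); the threshold
// and the onLoud callback are illustrative assumptions.
async function watchAmbientLoudness(onLoud: (rms: number) => void,
                                    threshold = 0.2): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  const tick = () => {
    analyser.getFloatTimeDomainData(samples);
    let sum = 0;
    for (const s of samples) sum += s * s;
    const rms = Math.sqrt(sum / samples.length); // crude loudness proxy
    if (rms > threshold) onLoud(rms);            // e.g. flash a red dot in an overlay
    requestAnimationFrame(tick);
  };
  tick();
}
```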

Fabien Bentou: Yeah. To briefly go back to the building-the-glasses part: I did not plan to do this at the beginning of this conversation, but while starting to hear the presentation, I received yet another gadget. Not going to detail it, but it's very raw, so I basically 3D printed a case for it so that I can travel with it in my pocket. And I 3D printed this during the conversation; I just went downstairs to my basement to pick it up. So it's just to show, and it's not amazing, but, I don't actually know the precision of that thing, the 3D printer, it's sub-millimeter precision. I just downloaded the file, which somebody else had designed, from the internet, and boom, I have a physical object in my house. I'll put the link in the comments for reference. So it's again another illustration that, I think, a couple of years ago, making your own glasses, you would sound like a madman even suggesting it. Now, with the kind of digital manufacturing a lot of us have access to, and it doesn't have to be in your actual basement, it can be the fab lab at the university or whatever. Why do I highlight this? Again, to promote this kind of thinking: let's make the glasses. If we say, oh, the red LED of the Meta glasses is not right, we can literally, instead of only talking about it: we talk, and we build, and then we talk again and see how it evolves. Because I discovered a couple of days ago that you can get a pick-and-place machine for a couple of thousand dollars.

Fabien Bentou: I thought those cost hundreds of thousands of dollars, but you can have them, like, 50 by 50 centimeters, or two meters by one, the size of a table that fits in your workshop, laser cutting, etc. So all this kind of small-batch manufacturing, but still precise and digital; the democratization aspect is there. So my take on this, my kind of call to action, is: let's use them. I really am so hyped, I'm like a kid. How can we have access to such things? It's really amazing. But if we just have access to it as a kind of concept, it's pointless; it's as if we didn't actually have them. So now that we do have them, yeah, let's make glasses. Again, this thing doesn't look good, but it's functional, and then I can iterate on it: okay, I want another version with other sensors, one that is going to be a perfect fit, etc. So, yeah, digital manufacturing now, even though it has imperfections, and it should not be sold as the solution to everything, for this kind of use case, a prototype in a short batch, let's say, here I see what, twelve faces? 3D printing cases, doing PCBs for this, soldering a couple of versions: it's definitely the kind of scale that is both feasible and even economically viable. It's not going to cost huge amounts. So it was just a very practical example to say it's not theoretical. It's now.

Speaker8: Yeah.

Frode Hegland: Thank you. Mr. Mark.

Mark Anderson: Yeah. I was just thinking on this sort of situational awareness. I very much follow on from Ken's point, which I agree with: this fact that in different places you still have really quite sophisticated tells. But I was also reminding myself, thinking back to my youth, when I was watchkeeping in the dark, monitoring six or seven audio circuits. The situational awareness, when working basically by moonlight with things like helicopters and no lights, was something you really, really had to work incredibly hard at. And the reason I mention it is just how little it took to completely break that, which is one of the interesting challenges for us. That doesn't run counter, for instance, to what Toby has just said, because, absolutely, yes, I really do think we should try these things; if we don't try, we won't know. But it's the issue of how you move that to scale at a rate that doesn't run faster than the rate we, the humans using it, can cope with. I hope that doesn't sound dystopian, because that's not my take on it. But I think that's a challenge. And I just wonder, actually, if there's anything out in the public domain. I imagine the people who are doing head-up displays, you know, the Americans, the F-35 and things, where people are flying effectively with almost a virtual display, must be working under a very, very high workload, and I'd be interested to see if there's any interesting stuff that's come out of that that might be pertinent to us.

Frode Hegland: Did you see the guy jumping out of an airplane with a Quest? Excuse me, with an Apple Vision Pro.

Speaker8: Well, somebody has to.

Frode Hegland: The amount of dumbness that’s coming out of this incredible thing is really a fascinating meme. Leon, please.

Leon van Kammen: Now I think everybody's going to check that video, now you mentioned it. But yeah, I was also thinking that there's still a sort of niche, the human-centric niche. And I would like to compare it with these glasses. I'm not using them for games or other things; it's really for me, to make my life a bit easier. I have minus one, so it's really useful. I really love wearing them. And I was also able to go to a place where I could pick the frame, so this was a really personal experience. And I can also imagine, well, it's not crazy to assume, that people with other problems would like to have tailor-made glasses as well. For example, I have tried Fabien's monocular. I really liked it. It was text, and it was monochrome. And I was thinking, hey, it's not monochrome? It's color? Okay, so it's even better. Anyways, I was thinking that sometimes when I'm driving there are, like, four roads going in all kinds of directions, and it's not always easy: I cannot see the lines all the time, especially when it's dark.

Leon van Kammen: It's very intense. And I wish there were some kind of drawn lines projected, so that I could sort of see where the road is. So I could imagine all kinds of layers or helpers. Also, maybe people who have bad balance could see something which keeps their reference a bit. I mean, if you have an ear problem or tinnitus, that can mess up your balance a lot. So I could imagine a series of glasses which are not the fully immersive ones, but just simple glasses which make a person's life a little bit easier, without any apps or games, just very simple guides projected on top of their reality. So those were just my thoughts there.

Frode Hegland: Thank you.

Brandel Zachernuk: The challenge with something like that, in contrast to what I presume the monocular is doing, is that having a head-up display is very different to having a world-registered head-up display, in the sense that Google Glass didn't bother trying to reconcile its immediate, exact position with the information it was displaying. Having lane markings, beyond saying you are in the left lane, you are in the middle lane, you are in the right lane, would be possible with something without direct, immediate world registration. But the kind of apparatus that's necessary in order to make sure that things are lined up, that's a really hard problem. Obviously some things have solved it; Apple Vision Pro is pretty good at it. But you need equipment. And so the big challenge is: under what circumstances are you willing to pay, both the obvious, direct financial cost, but also the ergonomic, power, and performance cost, of registration, versus merely having that information available? There's a whole constellation of circumstances in which something that is absolutely as light as possible is better, and then there are places where you need to pay those human-factors, social, ergonomic, financial costs because of the fidelity of those things. All of that will converge as people figure out how to make those things easier. But there's going to be a disparity between what works where, for the foreseeable future.

Frode Hegland: Ken can go ahead, but I just wanted to highlight the chat. The stats of this Monocle thing are quite crazy. Please have a look, guys. Ken, where are you?

Frode Hegland: There you are.

Ken Perlin: I'm here. So I wanted to comment on that. When I've been talking to people about this, since the 90s, for the last 25 years, the conversation has always been, at least in my view, that there is the easy thing, which is not that interesting, and the hard thing, which is essential. The easy thing is: I can put a display in front of my face, but it's only of limited relevance. And the hard thing, oh, my light just went off, but it doesn't matter, the hard thing is: my headset is aware of the world around me, and the display appears on that wall, or in the air, or on that table, or whatever, as I move around. And the reason that doesn't bother me, and it's never bothered me, is that we know Moore's Law continues to work. Many of you remember when video chat was a fantasy. Until about 2000 or so, until about 24 years ago, people would try to do it, and they'd get fired from their company, because they'd be the designated sacrificial lamb who was handed the video chat project, because it just wasn't ready.

Ken Perlin: And then it was ready. And now, of course, here we are, Zooming. The only thing that's really changed is the combination of computer power and connectivity. And that just continues. So there will come a crossing point. Just as with my graphics algorithms: at some point you could only use them in movies, because it took an hour a frame. And then at some point Moore's Law came along, and it's like, oh, now it's real time for games. But the algorithms didn't change. What changed was that the tidal wave just moved along, and it was the same thing. This is why Steve Jobs waited until 2007 for the iPhone. If he'd tried it in 2005, it would have been too early in terms of the capability. So I'm not worried; none of this is science fiction. And one of the nice things, I think we all understand, is that Apple did us a favor with this thing, in saying: yeah, it's a little too early, but let's all start thinking about it, because it's going to be this very, very, for them, important market, and, for us, important paradigm shift. That's my $0.02.

Fabien Bentou: Very briefly: I put up another set of glasses. These are some Bose ones; I got them for a workshop in New York five years ago, maybe, I forget, a long while ago. Point is, I used them until they didn't work anymore; I think the contact in one of the arms is broken, and I mean to fix it. Those ones I wore until they died, until there was no battery. The Google Glass or the monocle, I wear until my experiment is done. Because unlike, for example, XR glasses like the Quest or Lynx or, I imagine, the Vision Pro, I'm looking at an object that is really close to my eye, and in practice the focus on it is straining; it is just tiring rather quickly. So unless there is a real reason, like information displayed there, in the moment, for a purpose I need, for me, at least in terms of how my vision works, and I imagine for most people, it is just too challenging. So I don't want people to imagine that using this and using that are the same in terms of comfort. I mean, I did actually also try the Vuzix Blade and this kind of head-mounted display where you have just an overlay of text in it: it's really demanding. Whereas these, where it's just audio, I wear them, I'm going to walk around the park, I'm going to run around and whatnot. I can wear, and actually do wear, them all day long, basically. And it's a really different thing, but it's also a different medium. It's audio. I don't need it in context; I decide the context. It's just like listening to music or a podcast. So it's also a little, I don't want to say warning, but this kind of overlay display, at least in terms of human vision, as far as I know, is quite straining, quite demanding. It's much lighter, much thinner, a lot of qualities, but in terms of usage it's also challenging.

Frode Hegland: Fabien, we're supposed to be talking about the future of text. I'm just teasing you. I wrote something similar in the chat: there could be a minimal amount of text on the display, and 99% of it audio, right? Because when Brandel was coming up next, he told us to think, in terms of spatial things, about not wasting pixels, like: pixels are cheap now. Same thing here. Even though you have a display in front of your glasses doesn't mean you need to use a lot of it. Even if you use incredibly little, it can augment other senses. So sorry for teasing.

Fabien Bentou: Just a super quick note on this. It goes back a bit to what Ken was mentioning before: having the information, or displaying it, I don't want to say is trivial or easy, but in some ways it's solved. Having it exactly in place and in context, as far as I know, is not; I don't know how to do this. Hopefully, I mean, I don't see a specific obstacle that would prevent us down the line, in, I want to say, a couple of years hopefully. But the right information, in time and in context: every time I have a conversation with people, it's 'oh, we'll just put, I don't know, the number of spoons of whatever at the right moment in the recipe.' It's trivial to imagine; to do, not so much. For this example I can imagine some solution where you indicate 'I'm at that step of the recipe', but for it to otherwise just do it, I don't think is trivial.

Brandel Zachernuk: On the text front, one thing that I will absolutely attest to is that text is just what we call the arrangement of the maximal density and relevance of information right now. And something that has really come through to me as I'm reading Marshall McLuhan, still, in VR, it's a slog, is that there are all kinds of ways that I want to rearrange the book in order for it to do the job it wants to do, in terms of these chapters that are more or less in parallel, and being able to construct structures that would be relevant for reference and things like that. And so we should be open to those forms changing, because it's mostly commercial expedience, you know, manufacturing and commercial expedience, that makes the book what it is today. And my experience of cutting, what is it, Technics and Civilization, in half, just because it was more convenient for me to carry it that way, should be proof positive that we get a choice in these things. And in the same way that Fabien is talking about being able to make our own choices with regard to the form factors of technology, we also need to consider it within our purview to make choices about the form factors of information.

Brandel Zachernuk: Right, text. Another point that I wanted to make, though, about Moore's Law and timing, is that it's not just a sort of magical incantation, insofar as Moore's Law actually also tells you when things will be ready. I went to a really neat talk recently by the former head of Dolby Labs and now head of Apple's Vision Products Group, Mike Rockwell, and he told a story, I don't know if you've spoken with him about it, of sitting down in the early 90s and running the numbers to figure out when a headset would be ready. And he said somewhere around 2016, which is around when he joined Apple, for the benefit of actually trying to make those things work. But the other thing that I would say about it is that 2016 is not 2024, and that's because Moore's Law gives you the upper bound of what all of the effort and money and interest in the world will be able to give you in terms of getting those pixels in that silicon. And one of the other things that you need to do, while you're expecting Moore's Law to happen, is to make sure that all of the transitional points along those curves are also adequately profitable and valuable.

Brandel Zachernuk: So you need to stay alive from here to there, and make sure that there are useful intermediate points along the way. And the more that those can be recruited, the more guaranteed you are to be able to meet those points. I've said before that silicon, insofar as it relates to logic gates, was on that trajectory long before the silicon that we use for displays was on it. And now we have the same kind of motivational rationale for photovoltaics; it is now on that trajectory, and we can expect those kinds of gains over that time period. But for the longest time we only had the logic-gate ones, which is where Moore's Law actually comes from. So hooking those things up also requires an entire industry of demand. But when we say 'Moore's Law will take care of that', we kind of know the best case for when. And so we can also aim and plan, and think about how we make sure we can shepherd it along toward those glorious and terrifying endpoints.

Frode Hegland: And there's two things. But I have to say first, now that Moore's Law has been mentioned so many times: I just want to put on record that it was actually Doug Engelbart who came up with that, after his scaling study in wind tunnels. He did actually tell Moore, and he acknowledged that that's where it's from. So, just for the record, put that down. But the two things of relevance to this: one is, I saw someone with a Vision Pro, but they were wearing a baseball cap, so the strap was on the outside, and they said it made it comfortable because it took the weight and distributed it over the whole head, which I think was really, really clever. I'm probably going to try that. But I'm also thinking, along the lines of glasses, which are interesting: what about baseball caps? Oh yeah, I see nodding, cool. Even a baseball cap: you have the sound, you can have tactile stuff on your head, and also you have this little flippity bit which can give you a tiny bit of information if you need it. And then finally, virtual graffiti, Douglas Adams' whole idea a long time ago. Imagine if you decide to turn on a specific graffiti at a specific venue or event: you walk up to something, you get a little bit of text about what it is. Of course, you can do it on your phone or other devices, but it may be a way to leave messages that exist only in that location. Anyway, so many opportunities. Ken, please.

Ken Perlin: Yeah. Just to respond to that for a moment: I think the key cultural moment in the United States is going to be when people are wearing these while wearing their baseball cap backwards. That's when we'll know it's hit the heartland. But I wanted to respond to what Fabien said. I have three pairs of these, which all serve different purposes. These are the sunglasses that I wear on a nice sunny day. I also have my Bose prescription glasses, which also serve as prescription glasses if I'm driving, or if I'm watching a show, in which case I usually don't keep the audio on. But I would like to point out, in terms of the relationship between the audio and the future of text and everything we're all doing: I've already mentioned that in our group everybody has these, and we are constantly doing stuff with multiple people in the same room, playing around with things, pointing at things, and picking up objects, virtual objects. And we're working with another faculty member at NYU, Roginska, whose entire lab is focused on better and better immersive audio, matching room acoustics, and things that are technologically beyond what we can do.

Ken Perlin: But we're happy to work with them and let them do it. And right now, in our WebXR-based system, we have enabled the WebKit audio, and whenever anybody is talking, the audio stream is being turned into text. And because everything is networked, if I talk into my computer, that's going to go into my headset. So our students are constantly asking the question: when is it appropriate that something I said changes the content that I'm seeing around me? It's not that they have the answer; it's more that they're just taking that as one of the modalities, in addition to pinching and pointing and looking and all these other things. And I'd encourage this project to be thinking about it. It's not a competition; it's another natural form of human input that can be used in the future of text, however we display it and whatever it ends up looking like.
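
A minimal sketch of that modality, continuous speech turned into text in the browser. This uses the vendor-prefixed Web Speech API rather than whatever Ken's lab actually runs, which the conversation doesn't specify; handleUtterance is a hypothetical callback:

```ts
// A rough sketch: continuous speech-to-text via the Web Speech API.
// The `any` casts cover the vendor prefix; handleUtterance is hypothetical.
function startTranscription(handleUtterance: (text: string) => void): void {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!Recognition) throw new Error("Web Speech API not available");

  const rec = new Recognition();
  rec.continuous = true;      // keep listening across utterances
  rec.interimResults = false; // only deliver finalized text

  rec.onresult = (event: any) => {
    const result = event.results[event.results.length - 1];
    if (result.isFinal) handleUtterance(result[0].transcript.trim());
  };
  rec.onend = () => rec.start(); // restart when the engine times out
  rec.start();
}
```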

Frode Hegland: That's really, really important, the sound and context of all of this. Now, for some reason, in the middle of that, you made me think of your lab. Imagine someone is in a situation where they can copy something in extended reality. Of course, they should be able to do that copy with context. But imagine they do this, an action, so it goes to their eyes, their glasses. Because, you know, we have Fabien. Fabien, have you tried what Andrew's done, by the way, in our tests? Because we now have, okay, a quick aside: if you go to our Future Text Lab site and go to Current Testing, Andrew has actually made it so you go to the site, there's a big sphere, you touch it, you go inside our experience, and then the sphere is on your wrist, where your watch would be. You touch that for the control panel we're thinking about. That is 100% taken from you, Fabien, so thank you. But imagine you put something on your glasses, and then that thing, because it's open, open source, open network, open everything, when you have these small glasses you have, you know, like a little yellow sticky, so you can actually take information with you, to move things around.

Frode Hegland: It goes to the idea of something we discuss a lot: metadata, openness and connectivity. We shouldn't be stuck in one place. And similarly, imagine you're walking around and you experience something during the day. Even with these simple, simple, simple glasses, you can make a note, and it knows where and when you made that note. So when you then get into a richer environment, with Apple Vision or something like that, you have access to the same information, and it can go into that. This is one of the reasons a third of our Sloan thing is metadata, of course, primarily Visual-Meta, but it doesn't have to be. There has to be metadata to make any of this work. That was my soapbox for the day, number ten. So we're saying yes: you have definitely stirred the hornet's nest. You know, poor Fabien comes in and screams into the void, and you've extended that echo, which is very, very good.

Frode Hegland: It's an incredible time to be alive. I'm going to find an Apple commercial, so please speak amongst yourselves. There was a really, really cool one that I think was 20 years too early. One second.

Brandel Zachernuk: Oh, Ken, there was a post that you made, I feel like it was probably over 20 years ago, if you recall it. It was a series of joke products that Silicon Valley would make. I think it might have even been before the early-2000s dot-com boom, and it talked about multicolored objects that were still useless, but were attractive because they came in multiple colors. It might have been a satirical take on some of the things that were cropping up around the multicolored iMacs at the time. Do you remember that? Does that ring a bell at all?

Ken Perlin: Yes, I do. It was in 1997, when I started realizing that the entire culture, my theory was, was changing as the baby boomers all started becoming old enough to have money and power. This was the resurgence of Steve Jobs in the mid-to-late 90s, tapping into that with Toy Story and the iMac. Even the US money was getting simplified, and there was the coming back of the Volkswagen Beetle, this time with a bud vase. Basically it was the baby boomers all embracing their power, and everything became a grown-up version of play school, with squared-off rectangles, and you just saw this whole design thing telling us: we're in charge, and we're playful, and we've got money, and nobody can stop us. It was a whole thing. And so I actually came up with a parody about Lickable iMacs, just because that seemed to be what was going on with this infantilization of an entire generation. But that was a while ago, almost 30 years ago. That was a different time. Now we've got other stuff; now, of course, the world is ruled by Taylor Swift, and we have to move on.

Frode Hegland: That's just perfect. And I just put a link in here to that video. If you want to, now would be a really good time to watch it, obviously muted. It's just absolutely wonderful. In terms of the infantilization: someone pointed out to me recently that fashion is basically dressing like what you don't have now. So if you have a really messy world, you want to be very elegant, like in the 50s. Whatever it is, it's a reaction against what you have.

Frode Hegland: So yeah, the Lickable iMac. I like that.

Ken Perlin: I could probably find it somewhere, but I'd have to look for it for a bit.

Brandel Zachernuk: I will do that. I think, I think it's this one: the Lickable iMacs page.

Ken Perlin: Oh, great. You found it. Well done.

Brandel Zachernuk: Thank you. Yeah, I was bringing it up in conversation recently to a friend who runs something called the Near Future Laboratory, and I didn't have quite enough to grab onto. And I was looking through your blog, which only starts in '07, so it wasn't there. Or '08.

Ken Perlin: It starts in 2000. January 1st, 2000. Yeah.

Frode Hegland: So I saw an ad in the back of a taxi here in London a few years ago that said: download this software and your computer screen will become a sun thing, so you can get a tan, you know, what do you call it, a solarium or whatever. It was absolutely ludicrous; of course that can't happen. But it was so stupid that I looked up the URL, and it was basically a public service announcement against getting too much sun and getting cancer. So the whole licking-your-screen thing made me think of that. So that was perfect, thanks for that.

Brandel Zachernuk: Yeah. I mean, the challenge I'm confronted by is: yes, we develop these spidey senses of walking in Manhattan, driving in Kansas. But not all of us, and not all equally, not to the same amount. And there are the mythical capacities and beliefs that people have about what technology can do; some people believe that you can just drill into an iPhone and you'll get a 3.5mm jack instead of the Lightning, or the USB-C now that it has. There's no reason why that would be true, but that simply isn't going to stop people. People have always been across a range of gullibility. But what technology does to catch up, and how knowledgeable people are about the way others are wielding it, is something that I'm a little worried about.

Frode Hegland: Absolutely. Fabien.

Fabien Bentou: Yeah. I put a quick tweet in the chat, but I'm not sure people are aware of this, and I'll put an example of documentation too. In terms of putting objects in space and coming back to another room and having them still there: WebXR anchors allow the 3D objects that are there to remain there. Namely, if I put a virtual screen in my office, and assuming the operating system of the device has recognized my office as my office, which is what the normal room setup does, and then I go to my living room, or to my office in the Parliament, and it recognizes it as such, then it will not put my screen there; it will put whatever I decided to put there. So, just a little clarification for people who might not be aware: headsets today are able to recognize a room based on feature points. It means that when you get in that room, not only do you have, let's say, a three-by-three-meter space, you have a three-by-three space that is oriented, let's say, toward the window, and thus the object is on that specific wall. It's not to the millimeter, but it's about to the centimeter. And that's relatively recent; a couple of years ago, that was not the case. Now, in the specification, you say: I have this room, and in that room I have this object in that position. In terms of post-it notes: I posted a demo I did a while ago for Google Tango, and there you did not have, let's say, the memory of places. You would know you were in a place, but having a whole catalog of places you had to sift through, it was not that easy. But now, to say I have a catalog of places, designated as such, and the objects stay there: it's not the future. It's the past already.
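
A minimal sketch of that flow in code. createAnchor is the WebXR Anchors Module; requestPersistentHandle and restorePersistentAnchor are the experimental persistence extension currently found on Quest browsers, so support varies; the "postit-anchor" key is an illustrative name:

```ts
// A rough sketch: create an anchor at a pose, persist it, restore it later.
// The `any` casts cover APIs not yet in the standard TypeScript DOM types.
async function placeAndPersist(frame: XRFrame, pose: XRRigidTransform,
                               space: XRSpace): Promise<string> {
  const anchor: XRAnchor = await (frame as any).createAnchor(pose, space);
  const uuid: string = await (anchor as any).requestPersistentHandle(); // experimental
  localStorage.setItem("postit-anchor", uuid); // remember it across sessions
  return uuid;
}

async function restoreAnchor(session: XRSession): Promise<XRAnchor | undefined> {
  const uuid = localStorage.getItem("postit-anchor");
  if (!uuid) return undefined;
  // Hands back a live anchor whose pose tracks the original physical spot.
  return (session as any).restorePersistentAnchor(uuid);
}
```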

Frode Hegland: Yeah, thank you. Ken, go ahead. I have a reply to you, so please go ahead.

Ken Perlin: So we use that all the time in our work, and just a caution: the inaccuracy is really annoying. Because if you and I are manipulating an object, and I see it here and you see it there, then there's a sort of cognitive disconnect. So we have a backup plan where we can always do a slight adjustment manually and say, okay, let's sync with each other. We do have virtual art on our walls that I see when I put this thing on, and we have different demos that live in different places in our lab, and it's good enough for that. But their anchor thing seems to have the inherent limitation, as you said, that you could be here, or you could be here, which is exactly problematic when you and I are trying to manipulate the same object. So, unfortunately: it's great, but you also need a backup plan for those cases.

Frode Hegland: So your Manhattan story is very provocative for me. And I think Hussain and I are a little similar when it comes to kids; mine is six and a half, all of you have heard about Edgar a million times. This weekend it was Chinese New Year, so we went into Soho. It was completely overloaded. But on the tube on the way in, and this is not just a proud dad moment, it's relevant: he was surfing the tube. He wasn't holding on; he was standing in the middle of the carriage, looking, paying attention, not falling over, something I remember from my teen years. I still do it: I refuse to hold anything on the tube, because surfing the tube is an amazing feeling. And he was now reading the signs. I mean, this is crazy to me; it's not that long ago that his reading was so slow. Now he could tell us, from the station to exiting, everything, every change. So here's my thing. The text for the station name is, of course, explicit information, and we have that. But the ambient information, how to stand, how to read, how to use your senses: if we really talk about this as a wider topic, this little silly thing about having a few colored dots at the top of your screen or whatever, if we as a community have a deeper discussion on that, we can develop some paradigms, rules, whatever, that can be used across all media. So we all know that a green dot top left means our partner's home; red top right means that is definitely not the right person entering our home, according to the Ring doorbell. The whole business of trying to develop new senses, maybe in the McLuhan sense, with all these media, could be absolutely amazing. I mean, already now the Apple Watch does different kinds of vibrations depending on what's happening, and it's enough that most of the time you know: is it a time change, or what is it? How can we use these new things for that?

Frode Hegland: That was a real question, not rhetorical.

Brandel Zachernuk: First of all, people need to have a playground to be able to explore and experiment with all of those options. And then, to some extent, there needs to be a degree of control that means people can choose and configure and construct those things. It's really challenging. And yet, one of my goals for WebXR, as I've said before: I think WebXR is important, and also so fatally flawed that it can't be the basis for a more comprehensive use of a spatial web. It needs to be understood as an escape valve for the future, for people to learn about things, and then to be able to encode those in a way that's safer, more easy to author, and has the ability for layering in accessibility in a way that doesn't oblige every individual developer to do it. So: through testing, and through people being willing to experiment in ways where they'll be throwing a lot away. But yeah, it's sticky to imagine that at the same time as imagining that that work is relevant for a mass market. I don't know that you could do both at once.

Frode Hegland: But testing is king when it comes to something like this. As we have it in our documentation: we need to experiment to experience. Ken.

Ken Perlin: So I completely agree with that. Everything that I'm doing with my group is entirely WebXR, and the only reason we can get away with it is because we're a university research group. I think of what somebody once explained to me years ago as the 1-10-100 rule, which so far seems to be quite accurate. I can do something in one unit of time where everyone says: well, there's a concept, that's really exciting, there's a little animated character or a procedural texture or something, let's all think differently. Then, to actually create a working demo that is meaningful and reliable, that people can actually go back to, takes ten times as much work. And to actually make any sort of product that could be used by lots of people, even a crappy one, takes a hundred times as much work. So we want to stay as much as we can in the one, where we are there to learn, and WebXR is just great for that. It's like your Arduino, or your little shop in your garage; that's what we use it for. But we're under no illusions that we're going to make mass-market products with it. That's not what we're here for.

Brandel Zachernuk: Yeah. And I want to make sure that WebXR works for the one; that's absolutely essential for everybody's collective imagining. It's when people mistake it for something that does the job of the hundred that's the issue.

Frode Hegland: Yeah. I mean, and this is probably because we're winding down here in time as well. By the way, in the middle of that I just got a weird-looking email; it turns out it's someone from the University of Maryland who wants to have a dialog with us, so that's good. I really would like you guys to start inventing in a different way. Mark and I have been arguing in Slack today over flat versus dimensional, because we agree, right? But now, how are we going to do it, and why? It's so easy to take an engine and explode it like a wiring diagram, if it already exists. But in knowledge, what are the components? What's the point? How can we do it? Andrew is working really hard doing amazing things. But if we're going to do the experiments, if we're going to have those few minutes, what in the world do we want to put in there?

Speaker8: Right.

Frode Hegland: I'm very happy to have more espresso, wake up early, and do it on my own. Some of us, you know, Fabien and me, we work almost opposite in how we look at things. But remember, all of you: even if it's a sentence or a pencil sketch, I don't give a flying monkey what it is, it is so valuable. And as Andrew and I were talking about last week regarding code: the uglier it looks now, the better it will look later, because we show progress.

Frode Hegland: Right, it's like a very teachery type of thing to say, but it goes for our own thinking. If we write some really crap interactions now, great, right? Nobody wants to just see the final product, because this is kind of academic.

Frode Hegland: So, yeah, if you have ideas for that: Slack. Or if you're not on the Slack thing, just email me. Right. Begging. Mark, please.

Mark Anderson: I was just thinking, listening to Brandel's point about WebXR maybe not being for the long term. And the question that led me to is: okay, so what implications does that have for the information that we want to put in and out of WebXR today, and something better tomorrow? Is there a difference? Is it just the engineering within the ecosystem, so we don't need to think about how we structure the information, or is it different? In a parallel project, we got to the point where it was: yes, metadata, good. And you ask people what they need, and they'll say things like the file size, which they know already; that's not really metadata in any meaningful way. So I'm sort of thinking: what are the things that we're maybe not collecting that we should be? Are we not recording them, or thinking about recording them, in a way that will be beneficial to our use of these systems? Do you have any thoughts on that?

Frode Hegland: Please, thoughts on that? And in parallel, we should try to attend this lecture if we can. Yes. Sorry. Please talk.

Brandel Zachernuk: Yeah. So I have fewer thoughts on that, and more thoughts on ways to have thoughts on that, which is: do stuff, pay attention to what is germane to it, and then try to listen for even the faintest signal of what its attributes are. You know, the stuff that I've done with books, in the form of the WebGL page-turning books, as well as the books with the covers, with the qualities and the embossing and the silver print and things like that: something I feel about that stuff is that it's just not that complicated, in terms of being able to work back from an insight into what reading is. People say there's something about books, there's something about having information in this way. I was actually over because my daughter's joining a scout troop, and one of the parents who runs that troop does audio QA at Apple, and we were talking about how Webex and Zoom are not a good proxy for the in-office kind of organization of information. One of the things, once we dug into it a little more, was that when you walk around a place, there are locations that you attach to people, or even just to conversations between people, be it in the break room or wherever. The water-cooler effect is real, but it's also enumerable.

Brandel Zachernuk: There are ways that you can describe what we do when we're in a place and we're using that space to make sense of things. So whatever method we have for becoming more attuned to the actual characteristics of our interaction, and what information is relevant, being able to first recognize that and then come up with proxies for representing it, is really, really valuable. It's exciting that so many things are Turing complete, and that Turing completeness implies a kind of functional equivalence and interchangeability of devices within a computer-science domain. But what that has kind of numbed us to, desensitized us to, is the qualitative differences that arise between things that are nevertheless all Turing complete. And those differences matter immensely to our meat brains. So observational studies, actually directly recording yourself or other people, and being able to really introspect on those micro-interactions with the attributes of the spaces and the ways we do that interaction, those processing activities, is a way to develop thoughts. That's where and how, to the extent I've had thoughts, they have come from. And there's plenty more where that came from.

Frode Hegland: Yeah, and thank you. So, I'm going to ask you: imagine you put on Hussain's glasses, and you are in a bookshop. Forget PDFs for a moment, believe it or not. And you see a book that looks kind of interesting. The metadata is available to the glasses because it knows the cover; the cover is unique, so it's like a QR code.

Frode Hegland: My question to you is: what should appear? Quick, invent. Have fun. Wow. Imagine you're standing in New York City, obviously, holding the book. What happens?

Ken Perlin: It tells you where you can get it cheaper. Oh, sorry.

Fabien Bentou: Go for it.

Frode Hegland: Yeah, yeah. Ken, please.

Ken Perlin: It immediately shows you where else you can get it cheaper.

Speaker8: Okay.

Frode Hegland: We'll mute you, Ken.

Fabien Bentou: I'm not going to do better, because I'm going to cheat. I remember a conference on data visualization in Paris five years ago, I'll look up the link. There was a researcher, I forget where he's from, I think he's Irish, and he did precisely a demonstration of augmented reality in a bookshop. I'm pretty sure there was indeed metadata on the side of each book: did I read it, did one of my friends read it, what do they recommend, are there other books on the same topic or by the same author in the bookshop? So I don't want to say don't imagine; I'm saying do imagine, but you have to at least build on top of what this guy's research did, and I'll put the link in the chat.

Speaker8: Yeah, please.

Brandel Zachernuk: So one of the things that occurs to me as I read McLuhan is the way that it's so deeply referential, so related to other things. And some of them I have; I've read Technics and Civilization, and he cites that, and it makes me feel fancy to have already gotten it. And the Shakespeare I've mostly already gotten through, so that's helpful. But even in fiction there are referential qualities, where the totality of that reading body is richer for the ability to know what else is across it. So, to some extent, actually being able to understand the coverage of the implicit subjects, and whether something contributes positively, or whether I feel like I've saturated that area, or if something is interesting but has so little to do with what else I know that a lot of the time I'm actually going to lose a lot by it, unless I want to commit to that sector of that social network of reading. So today, to the extent that there's this kind of connected graph, what I would be more interested in is the referential side, or different people's takes. There are some things that are unambiguous about what McLuhan is citing: when he's literally citing, you know, Mumford, nobody can argue that that is a callback. Whereas where he's stylistically doing something, or in fiction, when people are calling back, that kind of thing can be slightly more up for debate. But that context, and the connectedness of that context, is something that I would want.

Frode Hegland: Perfect, Ken.

Ken Perlin: I would say that, and I have a big science fiction bookshelf here at the lab, we have this giant, enormous collection of science fiction books, I would love to be able to look at my bookshelf and talk to it and say: where, in what books, are Asimov's robots referenced? Or who was the first person who talked about spaceships, or aliens, or time travel, or anything? Who did a riff on Hugo Gernsback, and who did that, and when? I want to use my voice, and I want to use my eyes, and I want to just look at the books and say: show me the magic, show me where it is. That would be what I'd want when I'm wearing these. I definitely don't want to be typing anything.

Frode Hegland: I think we're on the same kind of wavelength there, because I wrote down something similar: a list of authors, and other books that could appear floating next to it. I'm now thinking at the level of a Vision Pro rather than simple glasses, but logically it's the same thing, right? A citation tree going into whatever else is there. If there are images in the book, they can appear on a wall at the side. The table of contents comes out really easily. Reviews from Amazon and other sources could be available. Connections to previous books that you have read and own, too: maybe there is a major contradiction that's obviously been discussed, or an agreement; maybe it builds on them; maybe it's basically a copy. Maybe also references to your own work; these systems, our own LLMs, should know everything about our own work, right? And of course an AI summary, and references to places nearby, if there is such a thing. I mentioned New York because it's such a historical place. Sierra Leone. That would be really kind of nice. Also, books in the same store that may relate to it, other people who have read it, what they said about it, etc., etc., etc.
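
The first step behind most of that wishlist is a plain metadata lookup once the cover is identified. A minimal sketch, assuming cover recognition has already yielded an ISBN (that part is hand-waved here); Open Library's public JSON endpoints are real, while the Book type is illustrative:

```ts
// A rough sketch: resolve an ISBN to basic book metadata via Open Library.
// The Book type is illustrative; cover-to-ISBN recognition is assumed done.
interface Book { title: string; authors: string[]; publishDate?: string; }

async function lookupByISBN(isbn: string): Promise<Book> {
  const res = await fetch(`https://openlibrary.org/isbn/${isbn}.json`);
  if (!res.ok) throw new Error(`lookup failed: ${res.status}`);
  const data = await res.json();

  // Author entries come back as references; resolve each one to a name.
  const authors: string[] = await Promise.all(
    (data.authors ?? []).map(async (a: { key: string }) => {
      const r = await fetch(`https://openlibrary.org${a.key}.json`);
      return (await r.json()).name as string;
    }));

  return { title: data.title, authors, publishDate: data.publish_date };
}
```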

Speaker8: Yeah. Fabian, please.

Fabien Bentou: I will also put the link in the chat after. I remember last year there was a paper on splats, where basically you get a 3D model of a room, and there was one on splatting a bookshop. In it they basically used ChatGPT, if I remember correctly, as an LLM to do the interface, so that if you were looking for a book on a topic but did not word it perfectly, it would find that specific book, or the closest book related to your request. In terms of chatting with the bookshop, that seemed, at least in that paper, a pretty exciting experience: you wouldn't just query it in the, let's say, spatial sense, find all the green books, which already is pretty cool, but rather find the things that make those queries useful for you. So I'll put that in the chat. But I think that's indeed the kind of interface, not just programmatic, not just textual, but the combination of all of those, so that in terms of usefulness it is a step up, very obviously.

Frode Hegland: That document you just shared with us? I’m trying to add it to my library. And it is totally bereft of metadata, of course. Do you know when it was published?

Fabien Bentou: I think it was like six months or a year before the conference, before the talk. So I guess 2017, 2018 most likely.

Speaker8: Okay.

Frode Hegland: Let's not give Mark a heart attack here, shall we? We can't talk about metadata in such loose terms.

Fabien Bentou: It's been published properly; I just gave a link to the paper itself. But if you look it up with that name and the author, I'm pretty sure you can find it.

Frode Hegland: Did you? Yeah, let's see. No. Oh, it's kind of interesting searching for it, actually. Because searching for 'drawing blah blah blah', the title, brings up all kinds of... oh, here it is. 2017. Oh, there's even a BibTeX.

Frode Hegland: Right. That kind of illustrates our problem.

Mark Anderson: No, that's off Google Scholar, and, against all my usual habits, I haven't actually checked it before posting it.

Fabien Bentou: Just briefly: if you believe it's interesting, I can also ask him if he would be interested in giving us a short presentation on that work, and possibly an update on it.

Frode Hegland: Absolutely. I'll just add it to my library. Good. Yeah, that's really, really good. And yeah, we're really running low on time here. So I want to thank you for coming in today, and feel free to join us anytime; don't think that you were the guest for only one day. If you have time, you're obviously very welcome. And it's really liberating for us to think about text visually without having a huge, massive thing on our heads, to think about it in a looser, bigger way.

Hussain Panjvani: Just want to say thank you guys for listening. It's been great hearing all of your thoughts as well. I would love to come back and just listen to you guys. If I have any further thoughts, I'll email you directly, and we'll go from there.

Speaker8: Yeah. Brilliant.

Frode Hegland: And I do really, really, really need everyone to write down some kind of stuff. Okay, we're going to do two different things in parallel, and we need, all of us together with Andrew, to prioritize Andrew's time. We're going to look at library-level interactions, where we have a massive amount of things: books, authors, whatever it is. But we also really need a magical opening of a, you know, let's call it a book, forget paper, document or PDF, right, for the sake of thinking. You open the thing and it's like one of those folded paper things, the beautiful Japanese ones or the children's ones.

Frode Hegland: It's so annoying. In this community we have such great discussions and ideas, but it's so hard for all of us, including myself obviously, to concretize some of this. And now is the time to do it, because we can experience it. Is it going to be cool in XR, or is it going to be overly messy?

Frode Hegland: I mean, Fabien, I can imagine in your case, with your user group, that some of the documents you'll be dealing with are very location-based, because you work with the whole of Europe, so there are going to be a lot of references in there to places. That in itself can be a very interesting use case, because you open up the document, and I'm sure you have an official glossary, so to speak, of locations, meanings and connections that can be overlaid. For our Sloan stuff it's basically hypertext, the hypertext community. Do we have something similar there, something to hold on to and spread out? Oh, you're just running the clock down, aren't you? The clock is ticking. I feel like we're in class, and the bell is going to go in two minutes, and you've been given homework. Anyway, for those of you who have time and would like to, we continue on Wednesday, where we're going to be looking at what Andrew will have done by then. And I challenge you to improve my horrible control thing, which is like the simplest thing; there are many other ways we can go.

Frode Hegland: And other than that, I thank you for today. Any other comments or questions?

Fabien Bentou: I'll tease you with a demo of AR and a robotic arm for next time.

Frode Hegland: Oh, okay.

Speaker8: See you Wednesday.

Frode Hegland: That sounds cool, right? Yeah, wonderful. But hang on, before we go: Rob, what's your experience with the headset so far? You've been so quiet, and you're sitting there with a new Vision Pro, and you can run all the apps, which I cannot.

Speaker14: I had my hand up briefly, but it went away. Oh.

Speaker8: I’m sorry.

Speaker14: No, I have things I want, which I've mentioned. I just realized I wrote a whole bunch of stuff in January, and I don't think anybody saw it, because I don't know how to do that in Basecamp or Slack or whichever I'm in, the thing with the colorful icon.

Frode Hegland: If you send me just an email, I will post it on your behalf.

Speaker14: All right, I should learn how to do that. One thing I want to learn is: when I opened a document, which I've managed to do in the headset, it somehow highlighted a word, and I could then adjust the selection. I could probably have highlighted it deliberately if I knew how, but I have no idea how that happened, so I can't do it again.

Speaker13: It’s a long press.

Speaker14: So it's a long press. Okay.

Frode Hegland: I also find that interaction really, really bad for me at this point, when I'm trying to read using a native app. They just...

Brandel Zachernuk: Look, I'm not going to sing its praises.

Frode Hegland: What was that?

Brandel Zachernuk: I'm not going to sing its praises.

Speaker8: No, no.

Frode Hegland: But it's early days; it's experimenting. I'm just trying to read, and suddenly things are selected. It's like: hang on, my hands must be allowed to do things. And this is the thing that we found in these environments: often it'll capture a gesture without us meaning to make the gesture, in between gestures. Peter, please.

Speaker15: Yes. One thing I really loved from the Cyberspace: First Steps book was that notion of unfolding a space. So what if, where there's a reference in the PDF, there was a little sphere, like a little marble? And then if you click on the marble, just like the controller on your wrist, that would cause you to jump and enter the document represented by that citation.

Frode Hegland: Can you elaborate on that?

Speaker15: Well, if we embed a little ball next to the citations in the PDF, and that citation corresponds to a document that's available in the space, then when you click the ball it could sort of expand to envelop you, and then you'd be in the document that was being cited by the document you were in.

Speaker8: Yeah.

Speaker15: So it would maintain a sort of analog to the controller on our wrist; this would be a little tiny ball embedded next to the reference in the document, and clicking on that would enter you into the cyberspatial representation of the document being cited from the first document.
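
A minimal sketch of that interaction in three.js, which already ships WebXR controller events. enterDocument and citedDocUrl are hypothetical names; the ray-casting plumbing follows the standard three.js controller-picking pattern:

```ts
// A rough sketch: a "citation marble" that unfolds into the cited document.
// enterDocument and citedDocUrl are hypothetical; the rest is stock three.js.
import * as THREE from "three";

function makeCitationMarble(citedDocUrl: string): THREE.Mesh {
  const marble = new THREE.Mesh(
    new THREE.SphereGeometry(0.01), // a 1 cm marble next to the reference
    new THREE.MeshStandardMaterial({ color: 0x4488ff }));
  marble.userData.citedDocUrl = citedDocUrl; // carried for the select handler
  return marble;
}

function wireSelection(renderer: THREE.WebGLRenderer, scene: THREE.Scene,
                       enterDocument: (url: string) => void): void {
  const raycaster = new THREE.Raycaster();
  const tempMatrix = new THREE.Matrix4();
  const controller = renderer.xr.getController(0);

  controller.addEventListener("select", () => {
    // Cast a ray along the controller to see whether a marble was picked.
    tempMatrix.identity().extractRotation(controller.matrixWorld);
    raycaster.ray.origin.setFromMatrixPosition(controller.matrixWorld);
    raycaster.ray.direction.set(0, 0, -1).applyMatrix4(tempMatrix);

    const hit = raycaster.intersectObjects(scene.children, true)
      .find(h => h.object.userData.citedDocUrl);
    if (hit) enterDocument(hit.object.userData.citedDocUrl); // "unfold" into it
  });
  scene.add(controller);
}
```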

Frode Hegland: Yeah, we do need to work on what that cyberspace representation should be. And now we're over time. I thank you all for being here today, especially Hussain, who is new. And I look forward to Wednesday and next Monday. Bye, everyone. Bye bye.
