27 May 2024 (AM)

Monday 27 May 2024 (first Monday meeting)

Leon Van Kammen: Okay. I'm sorry about that. It wasn't recording, so I had to restart the meeting.

Speaker2: I.

Leon Van Kammen: If Leon is there. Are you here, Leon? Hello. So today is obviously completely experimental and we have no real idea what we're doing. It would be really good to go through what Adam is thinking about specifically, what Fabien is thinking about specifically, and of course what you're thinking about specifically. Also, depending on how much we want to do that, I'm just going to make a link here. I made a Google doc, can you believe it? Hang on one second. I just did it this morning. It's a public holiday here, so, yeah, I wasn't properly awake. Everyone, yeah. So it will be useful. So, Adam, you nicely badgered me last time about the importance of being able to read one document. Obviously one at a time; they can be next to each other. But you said the reading itself is really core. So I think we should start writing down the kinds of things that should be doable when you're reading documents, as well as what we've been talking about when working in a wider space. Oh, good morning.

Speaker3: So sorry. Can you explain it a bit more? This not being able to read properly side by side.

Leon Van Kammen: So, Adam correctly; okay, so I've been a little bit frustrated over the last few weeks with trying to get the group's perspective on how we can work in some kind of knowledge space. I called it a map sometimes; we don't have a better word yet. But, you know, what elements should we be able to move around over here and over there, and all that stuff, and what are the interactions? We need to keep working on that, for sure. But Adam also said: let's not forget, the user needs to be able to pick up, so to speak, one document and interact with it in useful ways as well. So we must not ignore that. That's why I put together (I want to say good morning, this is very good) this Google doc, where I haven't really started, where we should write all the elements we think should be extractable from a document and all the... oh, actually, sorry, one second. Edgar, please bring me your Minecraft book, I'd like to show the guys. Okay. And what we should be able to do with them. That's all just to really focus on that part of the experience as well. What do you want to talk about today, guys?

Adam Wern: Did you start, Leon? Yeah, I.

Speaker3: Think we both felt that we were starting together at the same time. Well, no. How about you start? I don’t have my. My brain has to start up first.

Leon Van Kammen: Yeah. Me too.

Speaker2: Yeah.

Adam Wern: So one part is: what do we want to do with text, with academic text? Somewhat Sloan related, or very; yeah, let's say Sloan related.

Leon Van Kammen: This is a very relevant way to start the day. Edgar would like to share; this picture, I'll show it. Did it come out? So Edgar has this book. It's a Minecraft book, but as you can see, it's a physical paper book. It's not a file.

Speaker5: And he is like a protector of the village. And I had that guy.

Leon Van Kammen: Open it up.

Speaker2: It’s a wow.

Speaker5: These creepers, they blow up and they’re zombies and skeletons. This is a village.

Leon Van Kammen: Yes, but don’t show anymore because the guys are coming this weekend. They can see it for real. Don’t show them everything. Some of it should be secret.

Speaker2: Okay.

Leon Van Kammen: No, they’re coming this weekend. I can see that with you properly. So that’s what we want to do, right?

Speaker3: Exciting.

Leon Van Kammen: And we want to have pop-up books instead of exploded books. We can call them pop-up books, maybe.

Leon Van Kammen: So I’m sorry about the interruption. Please continue.

Adam Wern: Yeah. So one thing that would be nice to cover today, or soon, is: I want to hear about your ideas for the space and for text and intellectual text work. What you would like to do, whether you would like to do any particular experiments or tests that can inform us, and how to coordinate what is overlapping and what is unique for each exploration. For the overlapping parts, it would be nice to see what we can do together in terms of interactions or infrastructure and so on. We haven't talked so much about the different experiments that we want to do, more directly or more concretely. Like your thing, Leon, with the contextual zoomable text that you showed recently; that's one thing. And I'm interested, apart from the regular kind of spreading things around, in how voice ties into everything. Voice interaction: not just dictation, but voice control, and some sort of hybrid between voice dictation and voice control. I believe it will be, perhaps not the dominant interface, but a very important interface for the coming ten years. And it has started to take off.

Adam Wern: And I think it goes well beyond XR, in that we will have ambient computing. We talk to the room, or it could be to the mobile or your laptop as well, but it will be much more important to speak to your computer. And now, with the LLMs, we can actually make the system understand, or translate, much more precisely; it's not just hit or miss. We also have the opportunity now to do a bit more conversational voice, not just dictation or voice commands, which have been the two major components until now. Even with home-brewed computing, I think we have voice as a conversational interface, and it requires very special considerations: how you, by progression, build up the result you want. Well, that was one random thought on the voice thing. But I'm also interested; I really want to know what Leon and Fabien are thinking here. We know what Frode is cooking up sometimes, and we also know the official Sloan thing with Andrew, but it would be nice to see the other things. Over to you to start here.

Leon Van Kammen: No, no, no. That's good. I just want to say I have a little more on the Sloan thing. I had a lovely meeting with Dene on Friday, which was quite random, and some things came out of that that are very useful for us. But when you're talking about voice, I'm actually very supportive of that now; I wasn't in the beginning. Obviously, voice in support of text interactions, not by itself, would be my personal interest. However, in Doug's NLS, when you wrote a command, you could always hit the question mark to get options for what the next logical bit could be, right? And we've talked, Adam, about how when you see a pop-up menu or equivalent, maybe in brackets and grey it says: this is what you would say to speak this command. What we might also think about doing, if the user chooses to have it on, is at the bottom of the screen, almost HUD style: when the user starts speaking, it transcribes it, and if they pause, you see a list of options of what they might say next that the computer will understand. So you would say 'select all', and it would show 'elements', 'people', you know. So it would be an augmented speech interaction with visual feedback, maybe. That's one little thing. But just to report, because, yes, I really want to hear what Bill has to say with your questions: the chat on Friday was based on what you and I talked about, Adam, which is the annotations of colors and logic.

Leon Van Kammen: So I needed an academic to find out what this should be. I thought of maybe calling Mark, but it was a bit late, so I called Dene, and she had time. What this thing is now called is semantic annotation, just to give it a name. So it's not just plain colors. And we built up a few things; this will be ready in Reader in a day or two, so we can all see how it feels. It'll be things like agree or disagree. There are different kinds of annotations, of course. One is for yourself: you're reading a paper and you want to know, for the future, what to go back to. Another, you're reading for someone else and you want to tell them, you know, this is a mistake, that kind of stuff. So we developed a list of that, called, as I said, semantic annotation. Now, particularly for you, Leon, the other thing we talked about is an extension of what Adam and I have talked about a little bit, which is what we call these spaces; you know, we've tried it in the groups and it's been a bit easier offline. No, actually, you know what? Sorry, I'm not at all changing the topic, just give me three seconds. There are so many things. Okay, just listen to this.

Speaker2: Second, one second.

Leon Van Kammen: Oh, what? It's not playing. Hang on. Maybe because it's being... okay, hang on one second. I'm so sorry. This you should be able to hear.

Speaker6: For this demonstration, you will have access to the ACM Hypertext Proceedings shown here in front of you. Just point to it and pinch your fingers to open it.

Speaker2: Right.

Leon Van Kammen: So based on what we’ve been talking about, I’ve been writing a script. You could hear that voice, right?

Speaker2: Yeah.

Leon Van Kammen: It’s actually very, very good. That one was medium. But I’m going to skip through some things because this is what I think.

Speaker3: Sorry, was it a real voice or a generated voice? Generated? That's pretty good.

Leon Van Kammen: Yeah. It's gone crazy. So, okay, I'm going to generate, download, and play the second one for you.

Speaker6: Please point to ACM Hypertext proceedings by pointing to it with your index finger, making sure the line which will then extend from your finger, lines up with the proceedings, then pinch your index finger to your thumb.

Leon Van Kammen: Crazy, right?

Speaker3: That’s impressive.

Leon Van Kammen: It even breathes at certain points. You can hear it breathe.

Speaker2: Right.

Leon Van Kammen: So, the idea.

Adam Wern: Which voice generator is that?

Leon Van Kammen: Slate. Rt.

Adam Wern: Not the.

Speaker2: Wharton.

Leon Van Kammen: Yeah. So the thing is, it does a kind of paragraph break; there's a little pause between each paragraph. So that's how we can do that. Hang on a second.

Speaker2: All right.

Leon Van Kammen: So this would be the first one.

Speaker6: You are starting here in the library. You are now in what we call the reading room.

Speaker2: That’s crazy good. So.

Leon Van Kammen: This should be quite brief, but I think we're now on the same virtual page. The way I'm trying to do it is to write the script that will be the audio when someone puts the headset on. So instead of us speaking to the person, I'm imagining an avatar speaking this. This is not necessarily something we'll implement, but it's to help us make it salesy and explainable. If we cannot explain it through the script, we can't explain it; that's the thinking. So first of all, Adam and I thought, and I'm sure you'd agree, Leon: what we call this, the whole system, is Augment. So, welcome to Augment in extended reality. You're starting here in the library. We just tell them where they are, and then we tell them about the hypertext proceedings. And we don't say the gun gesture anymore, people didn't like it; it quite simply says: point with your index finger, making sure that, blah blah blah. So we tell them how to point, and if they don't do anything for ten seconds, it tells them again in a different way. But here are the key things. If you open a document, or many, you're in what we call the reading room. And the idea that Adam has really been pushing, and I agree, but it's been difficult, is that you don't leave the library. The library is still somehow there in the background, so to speak, right? But being in the reading room means you're in a position where you can really focus on one document at a time. You can have several open next to each other, but each one of them is a focus on that document.

Leon Van Kammen: And then you have things in that space that can influence how you read; we call these elements. And finally, the last terminology: once you've gone through and made all kinds of things, and this is how you want to see it, you can save that as a workspace.

Leon Van Kammen: So that's basically what we have right now. And I could talk for another hour, but I will shush.

Speaker3: You mean workspace as a preset?

Speaker2: Yes.

Leon Van Kammen: So, one of the things that I have in Author, in the map, which I think is transferable logically, is two options when you're in a kind of map view. Let's just call it that even though it's wrong; we all know what we mean. One is 'elements', to choose what should be viewed: show/hide people, show/hide LLM, show/hide whatever. And another one called 'layouts', or 'arrange', where whatever you have selected you can align as a column or whatever. And probably under layout is where you would save the workspace, but that isn't self-evident yet.

Adam Wern: So to clarify, hearing Leon's question here: did you actually mean that the workspace is more like (you don't like the word template, but) a template for a way of working that you save? Is that what you mean by workspace? I understood workspace as a specific project space, more like that.

Leon Van Kammen: I think the word template... I think you're completely right on everything you said, Adam, which is a first. Just kidding.

Adam Wern: But it was a question. So how can I be right on a question?

Speaker2: Yeah, yeah. First of all.

Leon Van Kammen: A saved workspace is a template.

Speaker2: Yes.

Leon Van Kammen: Right? Because the thing you are saving is not the document itself; you're saving everything around it. Let's say George Washington is very important in your work, so you just have the text 'George Washington' on your physical wall, virtually. Then whenever you are in a workspace that deals with history, American history, you use the George Washington one as your workspace.

Adam Wern: So, a question there. Let's say we have something resembling tools in XR: XR tools, things that affect other objects. They can be on you, resembling a kind of work belt, something you carry with you; or you can have a workstation that is more fixed to the space. So either it's on you or in a space; I'm making physical, real-life metaphors here. And the same goes for the George Washington poster. It can either be content, as in something you work on right now (you annotate your George Washington image or poster or PDF), or it can be for mood setting, or for visual recognition, so that you know from a distance that this is your history room, and so on. So the same content can fulfill different purposes. Sometimes it is an object you work with, sometimes it's for the mood, and sometimes it's a general reference thing that you always want to have at hand in any history room. So we need a way to promote, or annotate, or mark the George Washington poster to show that in this room it's just a poster for mood setting; in this room it's a reference thing that should always be there; and in this room it's just content, just a thing that you work on. So there are categories here, if we want to go the route of having some sort of templated workspace.

Leon Van Kammen: Yeah, that's really, really important. Adjacent to that is the issue of how modular the system should be. One thing we've talked about is that when you go from the library to the reading room, you should automatically go to the Sloan Andrew stuff. However, you should just as easily go into something Leon is building, or Adam's, whatever. We don't want it to be cumbersome to go from one place to another. I think we should consider that for your body-worn equipment as well. You know, we have that sphere, the Andrew sphere, from Fabien; I would like some more of those. And, you know, when you do different work at home or in the garden or wherever, you may carry different tools. So I would consider workspace an ideal thing for that too: I'm now in a different workspace, so I actually carry different tools.

Speaker2: Yeah.

Adam Wern: And with the virtual, we have a very special situation in that you can copy your tools. You can have many versions of the same brush or scissors or whatever, arranged for each workspace; which is very different from real life, where you have to carry around your favorite tools because you don't have ten different versions of the same favorite hammer. You have one or two, perhaps, if you have one at work and one at home and one at the leisure house. So we have the opportunity here to arrange perfect workspaces. Sometimes the tools inspire specific work: just seeing the tool is a reminder of the work you can do. So leaving it in the workspace is a cognitive help for you. And I imagine the workspaces are pretty personal at this point in time; for Sloan, for example, it's not a shared environment. So leaving your tools there, your personal favorites, is not a problem; it's good to leave them in the space. And we could rather have more of an export or publish thing: if you want to take just the knowledge objects, the things you've produced, without the tools, and export them, you can do that. So at this point, I think we should just do different workspaces, leave all the tools that are inspiring or useful in place, and then not think so much about templates yet, as we don't really know what we want to template.

Leon Van Kammen: By template I don't mean pre-made ones, unless we think of one or two. It's really for the user to save over time.

Adam Wern: Yep. But what I'm saying is that perhaps we don't need that functionality right now. It could be enough just to have workspaces, which mix both tools (things that affect other things) and knowledge objects themselves, which are interactable but not tools. We freely mix them, leave them in place, and just have an export if we want only the content, the knowledge objects. Yeah.
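
A minimal sketch of how such a workspace might be represented, using the categories Adam describes (content, tool, mood, reference); all type and function names here are hypothetical illustrations, not an agreed format:

    // A saved workspace keeps everything around the documents: tools,
    // mood-setting objects, references, and the knowledge objects themselves.
    type Role = "content" | "tool" | "mood" | "reference";

    interface WorkspaceElement {
      id: string;
      role: Role;
      position: [number, number, number]; // metres, relative to the room origin
      sourceUrl?: string;                 // provenance, if it came from a document
    }

    interface Workspace {
      name: string;
      elements: WorkspaceElement[];
    }

    // Export keeps only the knowledge objects, stripping tools and mood pieces,
    // as in the "export or publish thing" described above.
    function exportKnowledgeObjects(ws: Workspace): WorkspaceElement[] {
      return ws.elements.filter(e => e.role === "content" || e.role === "reference");
    }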

Speaker2: No, that makes sense.

Leon Van Kammen: And a consequence of what I've been working on with annotations for the last half week: it started with Mark working on my thesis and being really annoyed at how he couldn't really comment; it takes forever to write something and then highlight. That's why we first started with the annotation. So if I see something in red, I know he thinks that's wrong; if I see something in green, he's just agreeing with it. Fine, right? But the second half of that is: when I choose to view the document, I should easily be able to say, show me everything that the person who annotated agreed with, and that's it. So you have the view, and that carries over to the workspaces too. Maybe you have a workspace for reviewing, for when someone has annotated your work: show me all of this in a column, all of that in a column, I want to see how it connects. You do a basic logical layout, so when you get your feedback it is useful in that way.

Speaker2: Yeah.

Adam Wern: And as usual with documents, when you publish there is this notion that you remove a lot of the annotations. For some documents, like a Wikipedia article, that's not a good thing; we want to see all the edits and the history of it. But for other documents, it's more important to remove the history and the internal working material, as it doesn't reflect your current thinking.

Leon Van Kammen: This is so brilliant. Thank you, Adam, because when you export, you need to remove those types of annotations, no question. But you should also be able to have added editor or author annotations that stay; that's something that Vint Cerf asked for when we did some experiments quite a while ago. So to me, it is really, really important that these colors and their semantic meaning are carried across when we're at that stage. Because I can imagine that in your environment, instead of carefully selecting a word, you literally have paint that you splat on and it will hit a sentence; a completely different interaction. Brilliant. Those annotations, to repeat, should have both the color and the semantic meaning attached, so when you go into Fabien's room, Fabien can choose: is it the color or the meaning that matters? But when it comes to the published ones, I think we should maybe use different language and call them notes. Because one of the great things would be the author just highlighting: these are the important sentences. That's simple. And when you choose to read, it's like: I want to see the author's view of this, because they wrote it for ACM Hypertext, which is very verbose and boring; I want to see what the author highlighted. So it'll just show those sentences, plus maybe some extra text saying, pay attention to this, or here's a secret link, or whatever. There's a lot we can put in there.
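
A sketch of what such a "semantic annotation" record could look like, so that both the color and the meaning travel between systems; the field names are illustrative assumptions, not the list actually drawn up with Dene:

    // Each highlight carries a machine-readable meaning plus a display color,
    // so a receiving system can choose to honour either one.
    type Meaning = "agree" | "disagree" | "mistake" | "important" | "note";

    interface SemanticAnnotation {
      documentId: string;                 // which document it belongs to
      quote: string;                      // the exact text span annotated
      meaning: Meaning;                   // semantic part, carried across systems
      color: string;                      // e.g. "#2e8b57"; presentation only
      author: string;
      visibility: "private" | "shared" | "published"; // published notes survive export
    }

    // On export or publication, strip working annotations but keep author notes.
    function forPublication(a: SemanticAnnotation[]): SemanticAnnotation[] {
      return a.filter(x => x.visibility === "published");
    }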

Adam Wern: Leon. Yeah. Do you have one more thing?

Leon Van Kammen: Sorry, one more thing, since we're on the voice, guys. Now, imagine embedding in a document a lot of stuff for an LLM, right? So you write a document, and instead of just a glossary, a human glossary, you explicitly embed: this is the extra data that, if someone reads this with an LLM, I want the LLM to be aware of. Let's say we have a two-page document, but we have twenty pages of metadata. Brilliant. Over to you. I promise to mute for a few minutes.
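
A rough sketch of the "two pages of text, twenty pages of metadata" idea; the document shape and prompt assembly are assumptions for illustration only:

    // A document carrying extra machine-readable context alongside the
    // human-readable body, intended for whatever LLM the reader is using.
    interface AugmentedDocument {
      title: string;
      body: string;                      // the two visible pages
      glossary: Record<string, string>;  // the ordinary human glossary
      llmContext: string[];              // the twenty pages meant for an LLM reader
    }

    // Combine the author-supplied context with a reader's question.
    function buildLlmPrompt(doc: AugmentedDocument, question: string): string {
      return [
        `Document: ${doc.title}`,
        `Author-supplied context:\n${doc.llmContext.join("\n")}`,
        `Text:\n${doc.body}`,
        `Question: ${question}`,
      ].join("\n\n");
    }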

Speaker3: All these things are really cool use cases. I think it's important to categorize them by where they should be saved. A workspace is a very broad term: it could be a library-wide workspace, maybe with multiple documents from the library; then I guess the workspace configuration has to be saved at the application or library level. But I can also imagine some things you might want to store inside the document, like even the last example you gave, Frode. So I think if we basically make a paper, maybe this weekend, where we say: all these use cases, which should be saved where, which are the coolest, which are maybe the easiest to implement; maybe we could even try one. And one more thing I wanted to say: I do think this voice, both telling you what the application is about and speaking back to you, might be a nice low-hanging fruit, especially in the case of search. For example, yesterday I was watching a Star Trek episode, and it was so cool that even in an emergency situation they could ask: show me all sentences with the word 'carbon'. Or you could even just say 'search carbon', and the voice would respond: there are, let's say, 15 occurrences. I have a feeling that this kind of interface, which might not even need an LLM (maybe it's just keyword search), where you go to that particular page and then say 'next' to go to the next occurrence, is very powerful for people who are not into XR at all. It shows them: oh wait, I can see how I don't have to use a keyboard in VR and can just navigate text. Sorry for the long story.
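
A sketch of that Star Trek-style search: plain keyword matching, a spoken count via the browser's standard speech synthesis API, and 'next' to step through occurrences. No LLM involved; the class name and shape are made up for illustration:

    // speak() wraps the Web Speech synthesis API available in browsers.
    function speak(text: string): void {
      speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    }

    class KeywordSearch {
      private hits: number[] = [];
      private current = -1;

      constructor(private text: string) {}

      // "search carbon" -> announce the number of occurrences.
      search(word: string): void {
        const re = new RegExp(`\\b${word}\\b`, "gi"); // naive: word is not regex-escaped
        this.hits = [...this.text.matchAll(re)].map(m => m.index ?? 0);
        this.current = -1;
        speak(`There are ${this.hits.length} occurrences of ${word}.`);
      }

      // "next" -> return the character offset of the next hit to scroll to.
      next(): number | null {
        if (this.hits.length === 0) return null;
        this.current = (this.current + 1) % this.hits.length;
        return this.hits[this.current];
      }
    }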

Leon Van Kammen: Do not apologize, that was fantastic. Two things on that, because I think that is very, very important and powerful. I actually had a communicator, a Star Trek communicator, once; got it for my birthday, really beautiful, and it came with a card signed by Jean-Luc Picard, so that was special. Well, the way they work, right: you touch it, you say the name of the person you want to talk to, and then you say the message. I don't understand why that hasn't been implemented. I mean, with a watch, if it's someone in your favorites, you should be able to say, 'Emily, can you come here, please?' And it understands it's for Emily, who is defined in my world. So it takes the whole thing, 'Emily, can you come here, please?', as an audio file and sends it; that is the ringtone. If the person chooses to reply, then it's an open channel, but otherwise I can't just listen in on her. So, talking of low-hanging fruit, that is something we might do in the multi-user environment here. Make it super cool. Similarly, I think that one of the problems in the world today is loneliness.

Leon Van Kammen: I think, you know, all of us at some point will feel a little cold in the room. And I don't think voice should be used in cases where it would interfere with hands, keyboard, the flow of the brain. But I think that being able to go through documents with a voice and a little bit of feedback adds life to it. So I think you're absolutely right. You know, if you come across a word and you want to search for it, you shouldn't have to highlight it. As Adam has said many times: just say 'look up blah blah blah', and the system should understand 'look up' as a command. It's not 'Hi darling, how are you doing?', which would be to your friend. If we have a set of dictionary things like that, I think it would be wonderful, because we do have the expectation that this can be used in a coffee shop, an office, at home, or a home office. They are very different, and the system should be aware of that. So yes, many thumbs up.

Speaker3: Yeah, I really agree with that. Also, to circle back to the features we're discussing: Adam also said that for some of them it's not really the right time. I do want to speak for Dene a bit. I remember that she didn't seem to really care about the space being messy; she cared a lot about having a lot of space. So I think we should take the workspace idea with a grain of salt, and maybe already try to define the lowest common denominator of a workspace: the simplest way to separate workspaces, without many bells and whistles. That might already be totally enough. So, in this sort of map I talked about (which features should be saved in library metadata and which at the document level), maybe a lot of those don't qualify, or are better implemented later. And there are probably some which can really add value.

Speaker2: So.

Leon Van Kammen: Yes. And I think we need to give our voice a name; we need to give the character a name, so we better understand what it means. Because I don't know how much research there has been (not much that I've seen, and it's been a long time since I looked) on voice and ear versus eyes. If we can find the sweet spot where we optimize the looking, but without having to touch: the laser hand, you've heard the H word I use to describe it, you know, it's not pleasant. If we manage to find the sweet spot between speaking and looking, it will be absolutely wonderful.

Speaker3: I have one more comment on that. It might even save us a lot of implementation if we just first try to get one voice command working. Because if you think about workspaces, it easily triggers a lot of implementation where you have to show all the workspaces. But what if you can just say, 'Emily' (if that's the name), 'save this snapshot', or 'save workspace', or 'go back to the previous workspace'? You might not even have to develop an interface to show all these things. Maybe there's a very simple way, just with voice commands, which saves you from a lot of visualization of which workspaces are where; because maybe you even remember: hey, when I was reading this book I made a snapshot, I know, so I will go to this book and ask to go to the previous snapshot or workspace. It might save so much visual programming.

Leon Van Kammen: This is really a fantastic call, because I have now used the headsets enough, to the point of frustration, to feel just what you said. Doing a visual interface in XR is awful. To do it for a PDF-style document, not so bad: you can stick things to the document. But for anything wider: where is it going to be, in front of you, on your arm? Of course we need some of them, but I think you're absolutely right: we can solve a lot of problems by making it really clear how the voice interfaces can work. So I completely agree, we should do some testing. You know, like 'down': when you have focus on a PDF-looking document (not an actual PDF, but a PDF-looking document), if you say the word 'down', it should go down a page.
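
A minimal dispatch table for that kind of single-word command on a focused document; the viewer interface is hypothetical, the point being that a small fixed vocabulary needs no language model at all:

    interface DocumentViewer {
      pageDown(): void;
      pageUp(): void;
      undo(): void;
    }

    function handleCommand(word: string, viewer: DocumentViewer): boolean {
      const commands: Record<string, () => void> = {
        down: () => viewer.pageDown(),
        up: () => viewer.pageUp(),
        undo: () => viewer.undo(),
      };
      const action = commands[word.trim().toLowerCase()];
      if (!action) return false; // unrecognised words are ignored, not guessed at
      action();
      return true;
    }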

Speaker2: Right?

Leon Van Kammen: Why faff about trying to point at the right thing in 3D space? Similarly, if that was a mistake, you should be able to say 'undo'. Right? So we don't need an undo for everything, which is an expensive feature to build, of course, because every state has to be stored. Maybe, maybe. But at least it should understand basic things like that for undo. Sorry, I got a user email complaining; I can't say how annoying that is to get on my watch when I'm talking to you guys.

Speaker2: Right? Yeah.

Leon Van Kammen: Yes. Just very happy. Keep going.

Speaker3: Oh, I'm wondering, maybe we can circle back to Adam, because he basically started this whole voice fever, the voice idea. Maybe, Adam, you have some comments on voice, and on the degree to which to implement stuff or not; the priorities, I mean.

Adam Wern: Okay. Yep, I've thought quite a bit about voice and how it fits in. Some random things I've thought about. A conversational voice, a thing you can speak to and that speaks to you, needs to be interruptible. A very important part of a conversation between humans is that we can interject, or give information to the other person while they speak, so we can steer the conversation a bit, either by gestures or by sounds or verbal agreements and so on, and clarify things along the way. That is a feature we want. It doesn't really address your question on priorities, because interjection is quite advanced to do, but it's something we really want going forward: conversational, interruptible voice interfaces in general. Where voice shines as a user interface is for things not present in the room. The things present in the room can of course be made interactable by the regular user interfaces; I'm not saying voice can't shine there either, but where voice really shines is for things that are not present. So fetching documents, searching: fetching is a very good use case. Instead of looking into folders or spaces to fetch that one workspace or image or document that you want, do a voice search for it, and present it as a visual thing.

Adam Wern: The result can be presented visually as well. So I'm all for multimodal, and especially in XR we have the opportunity to save multiple results. Whenever you search, you can have a small search bubble or object placed in space representing that search, which you could either build upon (refine that search) or keep there. And when you're done, you throw it away, or it automatically disappears, goes into the trash bin after a while. But the ability to keep past results is very important, I think. Siri and the voice assistants have a kind of amnesia: if you do the next query, it's hard to get back to your previous query. They are lost in time. And it's a bit similar to the undo function; the state you had a moment ago is also lost in time. It's not visualized, or often it's not. So that's also a good place for voice: searching the past as well. Some random thoughts on voice interaction, but it's very good for fetching things not present.

Speaker3: It’s a good point.

Leon Van Kammen: I strongly agree, especially if it's visually augmented. So you say something ('open that thing I saw on Wednesday') and you get a list of things from Wednesday. And because you've now limited the vocabulary, you can speak to what's there. Also, you talked about things in the room: if you say something and it doesn't know what you're talking about, it can ask you, 'what are you talking about?', and then you can point to it. Right. Yeah, absolutely. Guys, just for fun, can we spend a minute thinking of a name for this assistant?

Speaker3: A smart person from history. A historical smart person. Hamlet.

Leon Van Kammen: Oh my God, are you kidding? You know why that's extremely relevant, right? How much do you know about Doug Engelbart's work? In his 1962 paper, the theoretical system he developed was called H-LAM/T; something, learning, and something, it was an acronym. And Ted Nelson said, well, how do you pronounce that? And he said, I don't know, I've never said it out loud, I've only written it. So Ted said, how about Hamlet?

Speaker2: Interesting. That's quite a fun sound.

Leon Van Kammen: It's a challenge, because you shouldn't have to say the name every time, like your Siri-type thing, right? But still, when you do, the next sound will most likely be a command. So where you have your mouth matters.

Adam Wern: Yeah, there are a few different approaches here. I think having something more artificial, or stupid, as a representation could be better. If you have an owl or a dog or something (you talk to an avatar of a dog), you will be more understanding when it fetches the wrong thing. It signals something not fully human. If we have a human representation, it may be disappointing to speak to it when it's not human in its function, in its functionality. So if the dog fetches the wrong document, I'm fine with that; I know it's a kind of dog representation. Or if the Clippy character from Word in the 90s fetches the wrong thing, I understand it. So that could be a thing for now. But on the other hand, if we have the voice we heard a moment ago, the female voice: would it be strange to call that voice Clippy, or dog? Or Hamlet. Well, she can be a Hamlet, of course.

Leon Van Kammen: I put a link in the chat, too. It's a list of names of robots in history.

Speaker3: If you think about it, the metaverse, or AGI, they all suffer from this problem. They present themselves as something next-level: smart, amazing, impressive. But it's always very awkward when things don't work as promised, when they're not that smart or not that immersive. So yeah, it's a smart point to make.

Adam Wern: On the other hand, if we pick something like Hamlet, or a ghost, something a bit disconnected from us, historical, maybe a bit mythological, then even if human-like it will perhaps fulfill the same role: a Hamlet doesn't understand our modern PDFs very well, so Hamlet fetches the wrong thing now and then, and we can get away with that. But Hamlet, even though rhyming or speaking in verse, will still fetch the wrong thing.

Speaker3: By the way, a small observation: I also noticed that voice recognition actually has two dimensions. It has, like Frode said, the conversational dimension these days. But what I also noticed with expert interfaces is that they're usually command-based, because that basically saves time; and for people who use text a lot, having a conversational interface can be incredibly annoying. So I guess we have to keep both in mind: the command-based simple version, and the perhaps more conversational one. In shows like Star Trek, command-based is always the default, and if they want to switch to conversational, they say something like 'please speculate', and then the computer switches into conversational mode. Maybe we need to keep these things in mind; conversation-only might be a bit childish to some very experienced people, and they get a bit tired of it.

Adam Wern: But I wonder how much of that is due to the fact that command-based voice interfaces have been fire-and-forget: you're seldom allowed to refine the commands, or change them, or see previews. I think there is a hybrid where you build up the command slowly and see previews as much as you can along the way. So you could actually build a complex thing where you start without fully knowing what you want to do, get informed along the way, and finally land in a command that reflects what you actually mean.

Speaker3: I'm certainly envisioning something I've never seen before. Many times with voice recognition, when you speak, you see what the system thinks you're saying being typed out. But I've never seen a system where you speak and then see a command being formed on the fly, so that you're also learning what the short command would be for the story you're telling.

Speaker2: Yeah.

Speaker3: I've never seen something like that. I'm not saying that we should implement it, but it triggers these kinds of ideas.

Adam Wern: And that's why I'm saying it may not be for Sloan first, but it will certainly be the future of interaction, and partly the future of text. And it's even more universal than just VR or AR; it's everywhere, as I said: ambient computing, or regular computing. Phones and computers will have that, and the tech is good enough, but we need UI for it; really good voice interfaces that capture what we want.

Leon Van Kammen: The voice in the Vision Pro is absolutely phenomenal. Siri doesn't make mistakes in there, so if we have access to that through WebXR, it really helps us.

Adam Wern: Well, yes we do. And we can try it out in Safari; for everyone who has access to Safari, it works there. So you could use the Web Speech API, the voice recognition part of it. We have the opportunity to treat everything that starts with the magic keyword as a search or a command: we just strip the magic keyword off the phrase and use the rest for search, for example. That's one thing I want to play with this weekend: a small speakable prototype, a speech interface. That could be one thing to play with, for me.
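
A sketch of that weekend prototype using the Web Speech API's recognition part (exposed under the webkit prefix in Safari); the wake word 'hamlet' is only a placeholder taken from the naming discussion earlier in the call:

    const WAKE_WORD = "hamlet"; // placeholder name

    // Safari exposes the recognizer as webkitSpeechRecognition.
    const Recognition =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
    const recognizer = new Recognition();
    recognizer.continuous = true;
    recognizer.lang = "en-US";

    recognizer.onresult = (event: any) => {
      const phrase: string =
        event.results[event.results.length - 1][0].transcript.trim().toLowerCase();
      if (!phrase.startsWith(WAKE_WORD)) return; // ordinary speech: ignore it
      const rest = phrase.slice(WAKE_WORD.length).trim();
      console.log("voice query:", rest); // hand the rest to search or a command handler
    };

    recognizer.start();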

Leon Van Kammen: So here's a funny thing. When I worked on a little startup in the Valley many, many years ago (this was in the Skype days), my partner was in California and I was in the UK. On a workday, whatever we defined that as because of the time difference, we let Skype run all day. So even if we were talking to someone else, we could hear each other. It was very, very useful. Imagine if we built an app now (indulge me for a second) that does exactly that, but based on Star Trek principles. Imagine we define ourselves as an active group, and we all have an app on our devices. So if I say, 'Adam, can you have a look at this?', a little note on the screen says 'sending a message to Adam', in case I need to cancel because it was a mistake, right? Then that audio goes to you, and you can choose to respond to it or not. An incredible team-building, connecting thing. You may choose to have it speak, play the audio, or you may be in quiet mode where it's just text, for instance. That was fun; anyway, Leon, back to reality, please.

Speaker3: Yeah, I have an audit at 11:00, so I have to jump. But this is really interesting. I would also be interested in looking a bit into even a simple magic word as a starting point, to try it out. Because if you have that, then every experiment is so easy to tinker with.

Speaker2: Yep.

Adam Wern: And we talked about that briefly on a call; I don't remember with which people. But you can also imagine that we have an LLM taking that speech recognition and filling in the form, so to say, of a command. Now my programmer side brings in terminal commands, where you have lots of flags or extra subcommands that you need to fill in. In a similar way, the LLM can do the boring fill-in-the-form work for you, for a command that is more refined or complex, so you don't have to do it yourself. It can pre-fill the form for you, and you just say okay to that form, or continue to fill it in with your commands. You can do it conversationally, but it's very structured; especially, let's say, for a search where you want both a keyword and a time range. For example: 'I want documents on spatial hypertext.' Then you remember that it should be pre-90s, the first occurrence, or pre-2000, and you add that as a second subcommand, another thing you enter into that search form. So you build a search with your voice that is slightly conversational, slightly structured.
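
A sketch of the form-filling flow Adam describes: each utterance updates a structured search form that the user can confirm or keep refining. askLlm() is an assumed stand-in for whatever model endpoint would be used:

    interface SearchForm {
      keywords: string[];
      before?: number; // year
      after?: number;  // year
    }

    declare function askLlm(prompt: string): Promise<string>; // assumed helper

    // Each utterance refines the form rather than firing a one-shot command.
    async function refineForm(
      spoken: string,
      form: SearchForm = { keywords: [] },
    ): Promise<SearchForm> {
      const prompt =
        `Current form: ${JSON.stringify(form)}\n` +
        `User said: "${spoken}"\n` +
        `Return the updated form as JSON with keys keywords, before, after.`;
      return { ...form, ...JSON.parse(await askLlm(prompt)) };
    }

    // "I want documents on spatial hypertext", then "pre-2000", might yield:
    // { keywords: ["spatial hypertext"], before: 2000 }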

Speaker3: Yes, there would be your hybrid interface, just by adding the LLM as a man in the middle.

Speaker2: Yeah.

Speaker3: That’s really cool. Yeah.

Adam Wern: It will probably be a bit slow for the moment, but it's still an interesting way to see it. It will perhaps still be quicker than launching some sort of search form and filling it in; that could easily take 10 or 15 seconds, even in good interfaces nowadays. So saying it with your voice and letting the LLM process it for five seconds could be quicker anyway.

Speaker2: Yeah, exactly. Yeah.

Adam Wern: Okay, go to your audit. But we will have the opportunity; when are you coming, then? Which day are you coming?

Speaker3: Friday, I think on the 29th. So that is.

Speaker2: Thursday?

Speaker3: No, sorry, sorry. The 31st. Yeah, the 31st.

Speaker2: So, Friday.

Adam Wern: Oh, nice.

Speaker2: Can you? All right.

Leon Van Kammen: Text me your details again, please, Leon. Or email, or whatever.

Speaker3: I will, I will.

Leon Van Kammen: Oh no, I got it, I got it. I got it here. Okay.

Speaker2: See you later.

Leon Van Kammen: For today if you can.

Speaker3: Okay. Thanks. Cheers, everybody.

Leon Van Kammen: I'm just going to put that in the calendar. It wasn't there; oh yeah, it was there. So Fabien and Leon are arriving almost simultaneously; that's going to be a logistical challenge. Right, a very useful chat so far. I think Fabien may join us, let me just see.

Speaker2: Yeah.

Adam Wern: So the voice interface may be a bit tangential, in the sense that it's different from the other things we've discussed so far. We've of course mentioned voice, but we haven't looked into it in detail. So it could be one of the interesting prototypes, to let people understand what it could be for. I think of voice as a very important part of XR, but a bit under-explored, under-shown; when we say XR we think about visuals and perhaps gestures, and a bit less about the voice and audio part. And I'm not saying that I want to speak to my computer all the time; I want to do things with my hands, with the computer. But I would certainly like the opportunity to let it fetch things for me.

Leon Van Kammen: The thing is, when you're in XR, unless you have a controller or keyboard, your hands are cut off; you know, the whole laser-pointing thing. We're all nerds in this group, but there are different kinds of nerds: the nerd who cares about the capability, and the nerd who cares about how you can use the capability. I very strongly see voice as being more relevant here than on a traditional computer. Reaching across with my mouse takes no time; trying to get a laser pointer working takes time.

Adam Wern: Yep. And for our use case, as we work with text, voice has a nice interaction with it. There is a tighter connection than with, perhaps, new musical interfaces or other kinds of professional interfaces, or architecture. At least in our use case, with documents and texts, I think voice is a bit closer to its source material. So, do you have any specific ideas for this weekend? Things that you personally want to explore, fun things?

Leon Van Kammen: For this weekend? The only thing I'm interested in is our dinners and lunches.

Speaker2: Yeah, yeah.

Adam Wern: I’m with you there.

Leon Van Kammen: So on Sunday, I've invited two other friends to join us for a kind of early dinner.

Leon Van Kammen: In our field; I have not invited anyone outside of the community. These are people you will have met before, even if only slightly. I think Björn is coming from the north of Norway.

Speaker2: Oh.

Leon Van Kammen: A ridiculously intelligent guy who's worked on all kinds of things. Not a super text guy, but he's enthusiastic about what we do.

Speaker2: Yeah.

Leon Van Kammen: So, no, I have absolutely no work agenda for the weekend. Because we're all, in a sense, control freaks; we all care passionately about these types of things. So I don't want to get into a situation where we're like, 'no, we should do this'; because, trust me, that's a large part of my personality. I'd much rather hear you say: no, no, for the next hour or two I'm going to take the headset, why don't you go and do something else, I'm going to be at the bottom of the garden, bye-bye. That would be success.

Adam Wern: Yeah, and I'm 100% sure that we can just play it by ear here and find tons of things to explore that are both relevant and super interesting, and pioneering in some sense.

Leon Van Kammen: Yeah. Also, Andrea couldn't make it this weekend; I did invite her a little bit late. But there are some people in our community who could come over later in the summer, so we might very well see if we do a second one.

Leon Van Kammen: And maybe the second one we do half organized, half easygoing, I don't know. Yeah. No, it's very exciting. And this voice aspect really intrigues me now, because I'm beginning to see how it fits with text rather than competes with text.

Adam Wern: Yeah, exactly. It should be integrated. I think too many people, and too many implementations, have gone: instead of your keyboard or cursor you get voice; and you also get a crappy voice interface that misunderstands you a third of the time, so you do commands that you really hate. But now we have a better hit ratio. And especially in XR, we can have it more visualized, more previewed, so it's less of a hassle when it gets it wrong. We have better opportunities to integrate it here and there.

Leon Van Kammen: So one thing that's kind of exciting: the early voice systems were based on having limited vocabularies, which is very powerful. So I'm thinking, what if we (or even you) develop a system whereby what is shown on the screen in XR is made known to the voice system? So, for instance, you might have something as simple as 'follow that link'.

Leon Van Kammen: It will follow the first link on the page, unless you say 'follow the second link'. Because, to me, programming, or interaction, is basically data plus command, right? So data plus command plus metadata equals opportunity.
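
A sketch of that limited-vocabulary idea: the accepted phrases are generated from what is currently visible, so 'follow the second link' only exists while a second link is on screen. The types here are illustrative:

    interface VisibleLink { label: string; href: string; }

    // Build the vocabulary (phrase -> target) from the metadata of what is shown.
    function grammarFor(links: VisibleLink[]): Map<string, string> {
      const ordinals = ["first", "second", "third", "fourth", "fifth"];
      const grammar = new Map<string, string>();
      links.forEach((link, i) => {
        if (i === 0) grammar.set("follow that link", link.href); // default: first link
        if (i < ordinals.length) grammar.set(`follow the ${ordinals[i]} link`, link.href);
      });
      return grammar;
    }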

Adam Wern: Yep. So there will be endless opportunities for work there, of course. Lots of things to think about.

Speaker2: Yeah.

Adam Wern: With the prototype, there's more I can do with the hypertext thing: searching all fields, even the hidden fields, and presenting those as results; searching abstracts, searching full text, searching all the fields with the keyword. Having more information available behind the scenes than what you see is also very important here. Then of course there will be challenges, like you said with the link example: identifying things when there are multiple, clearly indicating to the system which one you refer to. That is a challenge, but it's well worth it. It's a good challenge.

Leon Van Kammen: Yeah, yeah. It’s getting.

Adam Wern: So do we think Fabian will join us?

Leon Van Kammen: He is a few minutes away, he said. He texted.

Speaker2: Okay. Oh, good.

Adam Wern: Also, this Friday is my birthday. I notice I've run away from my family and it happens to be my birthday, so I need to take you out to dinner Friday evening or something.

Leon Van Kammen: I'm going to have to pause the recording for a minute.

Speaker2: Oh.

Speaker7: Meeting is being recorded.

Speaker2: I don’t know. Hello? Well, good. Yeah.

Leon Van Kammen: We've just been talking a little bit about the weekend, and we've decided on a few things. Friday is going to be dinner in town, a Singaporean restaurant, and we're going to try to find some immersive things. So if there's anything you want to put on the agenda, please do.

Fabien Benetou: Like an immersive dinner? What do you mean?

Adam Wern: Oh no. I actually searched for 'immersive London', and there was one thing where I think you go to a kind of prison: you get an orange prison suit and you sit in cells, drinking beer. So there was a prison setting; half immersive, but maybe not the kind of immersive we were referring to.

Speaker2: Can get very.

Fabien Benetou: Immersive, too. But risky.

Speaker2: Yeah.

Leon Van Kammen: Yeah. So Friday, dinner in Soho: Singaporean, Chinese, Asian; the other thing is still open. We'll have a dinner here early Sunday. Other than that, we're looking into more digitally immersive things. There are some things we've seen ads for on the Tube, so we'll see what people want. I'll post it.

Fabien Benetou: There was also the potential discussion with Odin, for the interaction design. Remember, I suggested he make a presentation, but since he's based in London, we can meet him directly.

Leon Van Kammen: Yeah, absolutely. So looking at the calendar here: you arrive at 12:00 at St Pancras; Leon arrives at Gatwick at one. So I think it may be best if we pick him up by car and you take the Tube or train to Wimbledon, because he's further away. But that means that from after two or three o'clock on Friday, we are free to meet him.

Fabien Benetou: I think we said Saturday 3 p.m. at the Tate Modern was the last suggestion, which he accepted.

Leon Van Kammen: Okay. Can you just write his name again in the chat, so I don't get it wrong? Yes, that's a very good place for us to be. Adam, that sounds like fun too, right?

Speaker2: Yeah.

Adam Wern: That's perfect. It's interesting; I followed him before he was interested in XR. I have lots of his old two-dimensional prototypes, gestural-interface kind of things, and animations that I saved to my computer five years ago and so on. So it's interesting that he has now stepped into XR, which is an even better fit for the things he builds.

Leon Van Kammen: But, Fabien, do you know where he's from?

Fabien Benetou: I do not.

Leon Van Kammen: Okay. Because whichever country it is, politically I want to make sure I don't say the wrong thing. I do work with Russians (it's not their fault their government is absolute shit), but at the same time I have Ukrainian family. So, you know, his name could be either, and it's all good; I just don't want to say something stupid.

Fabien Benetou: We will say stupid things, but we’ll be cautious on that topic.

Leon Van Kammen: Yes on such topics.

Speaker2: Right.

Leon Van Kammen: So, Fabien, just to fill you in a bit, we've gone through a few things; Leon was here earlier. One of them is that we are really excited about voice interfaces in XR, which obviously Adam has been pushing for quite a while: voice interfaces that augment text rather than compete with it, which is a very important thing. We've also looked at the language for, sorry, what this thing we're building is. So: I had a nice meeting with Dene on Friday. Okay, sorry; annotation is crucial. If you cannot annotate a document in academia, you kind of can't read it, right, because everything has to stay in your head. So we talked about semantic annotation, where you go through a document, and if you're reading it for yourself you highlight: this is interesting, or this is bullshit, that kind of stuff. If you're reading it for someone who wants that feedback, you'll do it differently: this is grammatically wrong, you know what's happening here. You want to mark things without having to write it all out, so you just use different colors that mean things. That's where the conversation started. The point is to save this in a way that can be used by all the different systems we are developing: so if something was annotated in Adam's world and opened in your world, your system should know that the user said 'bullshit', or whatever it might be, right? And in order to get the language done, which has been a bit contentious for a while, we've been working on a script. The notion is that you have some kind of persona in the space speaking to you when you get in there.

Adam Wern: When you say 'we have', is it the royal we, or have you and Dene written that script, I mean?

Leon Van Kammen: The script? Yeah, no, that's a very fair point. The script is something I've been working on, based on what I've learned talking primarily to you, and then somewhat to Dene. So I've just been doing that with the script. And it's really important: I'm not saying we should actually build it. I'm just saying I will hopefully have it presentable by our afternoon meeting today. The idea is: if you want to make a product, ideally, to me, you make the advertising first. If you can't make an ad that really presents it, you don't know what you're doing. After that, it should be either a walkthrough or a user guide. It's just that kind of thinking. So, let's say you put on the headset and you go into the space, and you hear:

Speaker6: We’re starting here in the library.

Speaker2: Right.

Leon Van Kammen: How cool is that? That’s a fake voice. Okay.

Speaker6: You are now in what we call the reading room.

Speaker2: Right.

Leon Van Kammen: So the whole idea is just explaining to the person where you are. The only language I think we’ve agreed on is this: the place where you have documents you aren’t reading yet, we still call a library. When you choose to read a document or more, but focused kind of one at a time, we call that a reading room, which does not discount having the library in your environment; it’s still there. The things that are all over the place we just call elements. And if a set of these is optimized for something, say a timeline, or for having George Washington on the wall as a keyword, whatever it might be, we call that saving a workspace. I think that’s it.

Fabien Benetou: Makes sense to me. One quick clarification, I guess, on the initial point about annotation and interoperability: the format, the JSON exchanged with Andrew a couple of days or weeks ago, does not support annotation, because there was no annotation. But basically, if you have an element, which according to the vocabulary you spelled out could be an annotation, and you share, say, coordinates, that would be enough to at least start. We can keep adding properties, say rotation, scale, colors, whatever. But the very basics of content with a position are sufficient to have objects like a document or annotations. That seems straightforward.

Adam Wern: Yeah. On that, I think even more basic, or more important, to preserve here is the original link to the source material. So if you make an annotation, knowing which position in a document, which word or phrase or page, or which document as a whole it refers to, and preserving that. I’ve been saying that the coordinates are nice to preserve, but the most important part is to preserve the relation with the source material. Our systems differ: mine may lay annotations out automatically in some sort of reflowed way, for example, while yours may keep them positioned very strictly relative to the original material. So there the positions are not that important. The important part is that the relationship with the source material is preserved. Of course, I’m not saying we should throw anything away, but the compatibility may lie more in that an annotation has a semantic meaning and a good connection to its source.

Fabien Benetou: Yeah. Just to build on this: my suggestion about having a position is the raw, naive one, for example an absolute position in space. But it’s indeed important to say that’s the worst-case scenario, the worst way you can do it that will still work. You must have provenance; you must know what you are actually referring to. Ideally you let the user, for example, move the annotation in space, but it maintains a link to the document or part of the document being annotated. So yeah, that would definitely be better.

Adam Wern: Yeah. And to argue against myself, in favor of positions: there are situations where, let’s say, you make three voice notes, or three notes where you write down something yourself. If you put them close together, you may not know the relationship yet, or what the whole thing is, but it’s important that they are still grouped together. And then it’s important to preserve the locations quite precisely, because you then preserve some sort of—

Speaker2: Yeah.

Adam Wern: —connection between them, even though it’s not refined or expressed yet. What that kind of grouping is may emerge later. Over.

Speaker2: Work. Yeah.

Leon Van Kammen: And this is something we’ll talk about more today, I guess, in our normal meeting. Not sure how long that will go on, but I’m glad we’re having this today, and I’m very grateful you guys are talking about the JSON issues, because this needs to be as object-oriented as possible. Ideally, and I’m sure you completely agree, I would like it so that almost anybody can come in and change almost any part of this experience. So, nice to listen to. Additionally, Fabien, we’ve been having some fun thinking about a name for the voice, if you need a voice prompt to start it. So first of all, I suggested on Friday to Dene that we call the whole system Augment.

Leon Van Kammen: And she didn’t have a huge reaction to that, I saw. So, you know, it would be nice to have a name for the system that makes some kind of historical sense. But in terms of the voice prompt: most of the time you shouldn’t need a voice prompt in this environment, but when you do... yeah. Do you have any thoughts?

Fabien Benetou: I think it’s interesting. I don’t like it, to be honest. Everything that sounds too human when it’s not, I don’t like. That’s my knee-jerk reaction. But still, in terms of accessibility, it’s positive. So as long as it’s framed in terms of not pretending magical powers, not tricking the user into believing there is more intelligence than there actually is, and it’s for accessibility, I’m happy with that, and I would frame it that way. In terms of naming or convention, I’m more into ‘let’s try and see if it makes sense’. So as long as it’s a name that, again, doesn’t trick the person into expectations but is still relevant, maybe a play on words on augment, an ‘Augmentor’ or something that sounds name-ish without losing the function behind it, I’m fine. I don’t have a strong position on this, except that I’m happy with whatever helps accessibility. And it’s a kind of egoistic viewpoint: for example, I’ll bring my headset to go to the UK by the tunnel, I forget the name in English, and I will probably work and/or play with the headset. It’s not the same context when I use it while traveling as at home, and whatever accessibility feature there is will help in such contexts. So that’s my framing of this kind of discussion.

Leon Van Kammen: That’s what Adam said.

Adam Wern: Yeah, I had a similar thought on not pretending to be human. I think I suggested that it’s better to be a Clippy, or a dog, or an owl, or a robot, or a spirit, something that is not meant to seem very intelligent. Then, when it fetches the wrong documents for you, you can get angry at your Clippy or llama character or whatever. It doesn’t pretend to be intelligent in any way.

Leon Van Kammen: Yeah, I fully endorse that sentiment, for sure.

Adam Wern: So onboarding is different: even a synthetic voice is fine with me there, as long as the script is written by a human, and its function is to inform the user of functionality, of what you can do, and perhaps give a bit of a warmer welcome than having text everywhere on screen.

Leon Van Kammen: That’s really interesting. I agree, and I think the transition will be interesting too. I could imagine a human-like character saying: hello, I am blah blah, I represent the Future of Text community, I hope you will enjoy this, here’s how you use it. And at the end of it, it says either ‘pretend that I am still the person speaking; I am now going to be living on your wrist as a little sprite, so please excuse the fact that I will have a smaller brain’, or ‘this android will now take over’, or whatever. So the transition makes it explicitly clear to the user that the voice interaction is with a simple system.

Speaker2: Yes, very.

Adam Wern: It’s interesting, the whole onboarding part. I played Tears of the Kingdom, the Zelda game, the latest, from last year or so, with my kids yesterday. I bought it earlier, but we only just tried it now. In many computer games you have, or traditionally you had, tutorial sections, which are very divisive in the community, whether you like them or not: whether you want to go through the training grounds, or go into a light, easy beginner section and try it out live, so to say, in the real virtual environment, the same environment you will meet later on, but perhaps a bit easier or simpler. I’m also one of those people who don’t like tutorials, at least not long tutorials; they need to be very short for me. I want to play around and then get instructions along the way, or find things out myself. And as I’ve understood it, many people are like that; they don’t want to go through long tutorials. So the trend seems to be that you intersperse: you have voices, you put things here and there, and it’s done with a light touch. You’re not forced to do things; the tutorials are placed in the starting environment somewhere, and you can try them when you want to, instead of being forced to go through training. Yeah. Over.

Fabien Benetou: First, can you clarify which game you mentioned? I didn’t get the name.

Adam Wern: Zelda: Tears of the Kingdom. Breath of the Wild was the first one; Tears of the Kingdom is the latest. I played it on Switch, it’s only available there, and it’s very sandboxy nowadays: you can combine many things with many things. So it has progressed. It’s interesting in that sense.

Fabien Benetou: Thank you. I think, indeed... if I take a silly example, it’s as if you get a new TV with a remote that has 50 buttons, and the tutorial goes ‘this button does this, this button does this’. After the fourth button you’re like: let me use it, and then come back, and then give me the advanced functions, like changing the color scheme and whatnot. Memorizing 50 combos up front: the only person who would actually benefit from that is an already-expert, who would go ‘okay, I already know all this, so it confirms it, or here is a slight subtlety’. Otherwise it’s flooding the user. Even if the tutorial is brilliant, even if it’s really gorgeous, you’re back in the open world thinking: how many of those possibilities should I use right now? I believe in the more gradual approach, but within a context: you learn a move, you do the thing. A lot of games do this. For example, they introduce enemies or challenges or puzzles whose minimum requirement is exactly the basic movement you just learned. You do it two or three times, and then you move to another challenge, and if you haven’t paid attention, they can give you a hint.

Fabien Benetou: But if it’s all in your face... I just started Elden Ring last week. It’s hard, and I like it; I think it’s a slightly sadomasochistic kind of game, but I guess I like it, I’m not sure yet. And the tutorial is both well done and terrible. It’s well done in the sense that it uses the geometry of the level: if you already know it, you skip it, you never even notice it; if you want to go through, you go through. But then they teach you, not everything, but let’s say ten movements. I did that, and I thought, yeah, it’s cool. And then I’m back in the open world going: how the fuck did I do that thing? And I’m so lost. So now I redo everything bit by bit and I’m getting it. I think it’s a good example in terms of spatial design and a terrible example in terms of... again, I think the tutorial is for people who played the four or five games before it. They know it; it just confirms. And for newcomers: you’re screwed, good luck.

Adam Wern: So it’s good spatial design but bad temporal design, in that it comes at the wrong times, perhaps.

Speaker2: Just. But. Yeah.

Leon Van Kammen: So one of the things we talked about earlier was NLS, Augment: when you’re writing a command, you could always hit the question mark key, and it would show you all the possible things you can do next. There are two things about that. One is, we also talked about how useful it would be if the AI voice has access to what is on the screen. So, first of all, if you say ‘open the link’, it will know it’s the one single link that’s visible; otherwise it will have to ask you. It won’t be that you have to select, because to me the worst thing about XR is selecting. It’s awful. Voice may alleviate that in many instances: instead of selecting the word elephant, you just say ‘look up elephant’, for instance. And in terms of the teaching thing you’re talking about, I would love it if the user could say something like ‘what can I say right now?’. If the voice assistant knows what’s on screen, it could then ask a follow-up question: do you mean about what you can see, or about how you can get other information, for example? And then it could actually show you a list. The whole idea of speaking and getting a reply, not only verbally but also visually, I think is really compelling. Like the thing Adam was talking about with searches: ‘show me everything related to this that I did last Wednesday’, and instead of speaking a long list, it shows you. And now the vocabulary is constrained, because you’ll pick the second one from the top, or you’ll say its title, or whatever it might be. So it is really, really important, if we’re going to have a voice part of this, to have a voice that listens and pays attention, just like with a human. Right? And we do not want to do old Siri. Apparently Siri in a week will be much more powerful.

Speaker2: But yeah.

Leon Van Kammen: It’s fascinating. And also, hang on a second, I just realized: what you guys are talking about, progressive disclosure to teach you what the commands and systems are, you should probably do that for reading a document as well, or reading a proceedings. You should be able to learn the proceedings in a sequenced manner too, right? Like something as simple as:

Leon Van Kammen: You’re looking at the whole proceedings. Show me only papers by new authors, as in, they haven’t been in ACM before. That should be reasonable, right? So you filter with this. Anyway, lots of fun stuff.

Fabien Benetou: A little warning in terms of conversational agents. I consider myself, in cases like this, a power user: if I believe in the software, I want to use it, say starting with the basics or the foundation, but if I’m really interested in it, I’m going to go all the way into the guts of the software, potentially even the code of the software itself. And that means I need to know its capabilities. Conversational agents don’t give me this; they’re hit or miss. Either I ask a question and I get the answer, boom, I move on, and it’s cool; or I don’t, and I have no idea whether it was within the scope of what the agent is capable of or not. So for exploring, I mean, as onboarding, gradual disclosure is perfectly fine, but it cannot be the only way for me. I need a way to say: okay, give me the list, even if it’s barely readable, so at least I can see: is it one item, 50 items, 100 items? Is it composable or not? Conversational only, I don’t see how it could give me this.

Adam Wern: Yeah, when Leon was here we covered one aspect of this: conversational interfaces, especially when it comes to command-based things, have been very fire-and-forget. You have almost never been able to refine your query along the way, if there is something you want to search for, kind of a power search, for example. So I would really like to have a conversational interface where you can have an LLM or something interpret what you say, fill in the forms, and refine the form for a command or a search along the way. A kind of power-user search that is visualized: the resulting command is precise, but the way of getting there is conversational and incremental as well. So it’s not fire-and-forget, and the result is precise. We haven’t really had that; I haven’t seen systems like that, at least, where you could refine precise commands through search. It’s either very conversational and interpreted, or very strict and fire-and-forget: single command, single action. So we have the opportunity to do a hybrid that is a bit more power-user friendly.

Leon Van Kammen: Yeah. For fun, shall we go through a few potential names? Not that we’re going to settle on one now. Shall I read what we did earlier, Fabien, or do you have some?

Adam Wern: I think we are... do you mean the names for the prototype, or for the voice?

Leon Van Kammen: I’m thinking the name of the voice, because it’s very fascinating.

Adam Wern: I think we are exactly the wrong crowd.

Speaker2: For this project.

Adam Wern: You need a certain kind of creative person. Well, I’m creative, but I’m not really a marketing person; as Fabien said, I’m more interested in the functionality. I’d rather take half an hour and discuss power-user features or interesting interfaces than the marketing naming of things. Though I agree the naming is important.

Fabien Benetou: So let’s play the game: Frode will say a name, and then we will say yes.

Speaker2: Okay. All right.

Leon Van Kammen: So we’ve written a lot, but I just realized: I really think maybe the voice interface should have a name that is two letters long: DV. I just put it in the chat.

Speaker2: For two reasons. Yes.

Leon Van Kammen: Okay, fantastic, I won. Well, let me see if you agree with the rationale behind it. Number one, it looks not-human, right? Like R2-D2, which fulfills the criterion of not trying to be too human. We can just say it’s an acronym for Digital Voice. Also, it carries an allusion: pronounced, it would be ‘Dave’, an allusion to Dave in 2001: A Space Odyssey. And personally, I’d like to honor Dave Millard. So if it just says ‘hi, I’m DV, pronounced Dave’, that might be a fun thing. Anyway, moving on to power stuff.

Adam Wern: I asked Leon earlier about what he is currently most interested in doing. So it would be fun to hear you, Fabien, either for Sloan or for the weekend or whatever: is there a special itch you want to scratch, or something you want to explore? It’s nice to see the overlap and see... well, I don’t want to say the synergy word, but now I’ve said it. The s-word.

Fabien Benetou: How marketing of you. So, practically speaking, the most interesting part for me, which won’t really answer your question, is this: I don’t go there to do what I could do on my own at home. It’s specifically the intersection, what we can discuss or build together, that’s interesting for me about the weekend. My own interests at the moment honestly change daily; there are so many cool things to build and try. So I have zero worry about finding something exciting to build: inspectability or introspection of programming in XR, having the virtual body where you can attach shortcuts and snippets, gesture management. There are so many things, a plethora of things to explore, and as far as I know they are genuinely unexplored; they’re not just cool or interesting, it’s not the kind of thing where you show up and someone says, oh, it’s been done before. So I’m super chill with improvisation, precisely because I’m confident there are so many things to explore. Now, if you push me and say, okay, you must choose the one thing to explore or push for this weekend: it would definitely not be rendering, and definitely not how to render efficiently, because I’m betting that can be solved, and probably not by me, to be honest.

Fabien Benetou: So what would it be? Maybe being more daring about the design philosophy I mentioned to both of you: facilitating more introspection and not hiding the scaffolding, how things are done. I talk about it, but I’m not 100% coherent about it yet. For example, some of the functions I use can be shown, but I don’t show them, and ideally they should be shown and editable in XR directly. Even the basic function of assigning a behavior, say dropping a shortcut on the wrist: that works, cool, and if you look at the source code you can see how it’s done, not complicated, cool. But if you’re in XR and you say, oh, when I do this, I want it to happen on both wrists, you basically have to either remove the headset or go off to some page elsewhere. Ideally you would have a special mode where you circle, you lasso around, or you point at the object, say the wrist shortcut drop-box, and next to it appears a console with the code that’s attached to it right now. So again, more introspection. And the goal of introspection is to be able to say: I don’t want this behavior, I want another, better behavior for my usage right now. That’s what I would want to push for.

Adam Wern: And that is what made HTML, for example, so very interesting: we had view source, and that made a whole generation of web page creators who were not super coders, though some of them became coders. They inspected a few JavaScript snippets in order to get fun pop-up windows, or colorful interactions, or jumping buttons, or whatever, and then that expanded. And certainly for XR right now, I feel a bit frustrated about the whole going-away-to-Blender pattern in WebXR: so much is produced in 3D software, which is kind of publish-first. Of course you can tighten the cycle between Blender and an XR environment and go back and forth, but it’s quite complicated. I want to have more objects that are created in WebXR and persisted in, like, our workspaces. I really want it so that almost all functionality can be created or done inside WebXR, so it’s not just a display mechanism for things generated somewhere else, but at least two-way. There.

Fabien Benetou: A quick anecdote: I’m part of that generation, the kind of people who came in that way. One of the motivations for me to be working on the web is of course the delivery mechanism: boom, you share a link, and even your fridge can check it, which is amazing. And deliverability is not just about reaching a broad audience and a large set of devices; it’s also that I build a thing and then I try it on my devices now. Literally now, or a couple of milliseconds for the network to catch up, but it’s imperceptible. And indeed, show source, being able to go through it, is super empowering for learners. You don’t have to know what a web page is, but if you look at it, you can open it up, and I don’t think there are more powerful ways for people to learn. Of course you could rebuild from scratch and define a new language and a browser and everything; that would definitely be a way to learn.

Fabien Benetou: But you won’t necessarily learn the same thing, and definitely not something compatible. So I can definitely relate to that, because I’ve been through it. And in general, philosophically speaking, I think it’s important: no black boxes. Even more so in the current state of XR: with anything innovative and new, we are experts and yet we don’t really know what we’re doing. That’s not me trying to be humble; as I mentioned a couple of minutes ago, there are so many things to explore, it’s so vast and exciting, but at the same time: what are we doing, how do we do this? And if we just juggle black boxes, sending black boxes to each other, we’ll manage, but it’s dumb, it’s very inefficient. So it sounds to me like showing the source is the right way to explore the unknown.

Adam Wern: And there probably exist other halfway points to code, like the visual coding languages, not just Scratch-like ones for kids, but also the node-based things we see in 3D graphics programs, and in sound, that are accessible to a wider audience. They have their problems, of course: when they get big, they can be just as unmanageable as code, and hard to inspect and understand. But there is something very interesting about those code blocks that define what they can be attached to, the inputs and outputs. And I can see that at some point we would like to have something in a WebXR environment where some sorts of interactions, like attaching to a wrist, are expressed that way: that something is attachable to another object, that it’s movable, that it’s expandable, that it’s copyable; they have different properties that could almost be expressed in a visual language. And if you really want to look under the hood, or not just under the hood but take the engine apart and see how it’s done in code, you can do so as well. But at least open the hood and change and fix things a bit. That would be—

Adam Wern: —wonderful to have at some point. And I think software is moving there, slowly, with Blender and so on. So I hope we can get there with WebXR as well. A-Frame is perhaps one waypoint there too, but it’s a bit more code-y. No? Did we lose Frode to lunch?

Leon Van Kammen: I am fully here. I’m just not, you know, ruining it by munching. But I’m listening very intently.

Fabien Benetou: I’ll take the liberty... I’ll actually have to go relatively soon, but yeah, me too. Maybe I even mentioned this during the last Future of Text discussion, but I was at a research university linked to a hospital a couple of days ago, advocating to them for open source. They work with emerging technology, XR rather, and I was insisting with them, again not because of my old philosophical position, but because they want to innovate. It’s a bit like XR: we don’t know what we’re doing, and yet we’re experts, so we need to explore, and they are, how do you say, holders of knowledge, I guess. In terms of the Sloan project, it’s to support academic work, and whatever affordances can be given to academics without expecting them to do even Scratch, without expecting them to do any coding. If there can be ways for them to, for example, get into a program more easily, so they don’t need to learn the ins and outs of Unity, or else they’re handed a black box they can’t use, and then they’re at the mercy of, say, Apple’s API as to what can or can’t be done.

Fabien Benetou: I think that’s very limiting. Whereas if there are ways for it to be both open source and inspectable, scrutable, with some way to see the source, it gives academics who might have ideas a chance. For example, maybe Dene will have a crazy idea on a Saturday morning at 3 a.m., and maybe Andrew would not be available, but she can poke at it. And when I say poke, I don’t mean formally, professionally program it; maybe she can, maybe she can’t, but at least she can dare to consider trying and look at it. Eventually, being able to push the discussions, and maybe even, long term, poke at different places, and sometimes it will work: I think it’s quite empowering. So it’s not a theoretical discussion about the power of programming or whatnot. For academics who want to explore, as a way to eventually do research better, it’s a good onboarding, a good way to dare to tinker. Even if we just have comments in the code, I think that’s really super cool stuff.

Adam Wern: Yeah, and we also have that in every office and in every family, perhaps not every, but many: the power user, the informal IT person, who perhaps sets up an Excel sheet with some formulas. People fill in the numbers or text, but the formulas are made by that power user, that expert. And in the family, someone sets up the networking, or something else that is a bit more complicated technology-wise. We need to empower them so they can empower their friends and family and colleagues as well. So having a power-user mentality, where they can easily share their creations with less technical users, is so important too, and sometimes forgotten: being able to share your creations with the people close to you matters.

Leon Van Kammen: I very strongly support this ethically, morally, and practically, even though I’m kind of at the fringes of it. One thing, though, as a real-world example: the programming my guys do for me, for Author and Reader, is done in Xcode. Xcode on the surface is actually quite accessible, but not really. There’s a lot of stuff I have to send away for people to do, such as changing the text in a dialog box, which is quite ridiculous. That’s an example, I think, of where there should be a layer on top, where changing the variables, changing the displayed stuff, without changing how things relate, should literally be a pop-up or a dialog: a safe place to enter. So I could imagine, especially in Fabien’s world, that I tap on a box and I don’t get all the access Adam does, but if it comes to ‘this one now represents ice cream’, I should be able to do that. So I’m firmly on board with the importance of this.

Adam Wern: Yeah.

Fabien Benetou: Very quickly: I don’t want to program. I program because I don’t have a choice; things don’t exist, so I need to force them into existence. I’d rather just dance around and move stuff around, etc. So, let’s say, I put code last. But it should still be a first-class citizen: it should still be editable and codable if there is no other choice. The goal is not to code; the goal is to do the task itself. But if you can’t, you can backtrack, you can go deeper, and then you can do the thing, including coding. You should not be dropped into an empty world where there is nothing and you need to code it from scratch, except if you want to. Otherwise, if you can just do the thing, just do the action, ideally efficiently and with minimal onboarding. That’s definitely what I also prefer.

Adam Wern: I think there is, not an obligation, but... yeah, perhaps an obligation for us programmers to expose a few variables to the users a bit more, like we have the reading distance in the current prototype. I can easily imagine ten other things that could be sliders, or a hidden menu somewhere, so you have the opportunity as a user to test things out yourself and help with testing as a pro. I do it for myself sometimes: when I need to change many variables, I try to expose them as live sliders, where I can tweak many things in place, in the real thing, not just change code, reload, and repeat, but tweak it live, so I can save a design that I really like. And I think we should expose more of that to the users, both for individual reasons and for helping out with testing and improving for everyone. So we should do that in our prototype if we can.

Leon Van Kammen: You know, they say the best mathematicians are lazy, because they always look for some kind of hack so they don’t have to do the whole thing. I think that probably goes for programmers too. As Fabien was just saying: he doesn’t want to have to, but he will do it in order to get the job done, to get the functionality. I think that’s a very powerful attitude, and we all have different levels of what that means.

Speaker2: Yeah. So.

Leon Van Kammen: Oh, yeah. We’re actually out of time for our meeting. But yeah. In principle, we need to make this open, not just as a checkbox for Sloan. Oh, you have to show this, but just two more seconds. We need to make it open so that anybody can add a component and reroute this stuff. Ideally, and I’m not saying we can do it all this year, it should also be easy for somebody to build a thing through which the end user can then interact with the system. Right? Yeah. What is happening there with you?

Fabien Benetou: What’s happening here is a live illustration of Adam’s point about exposing variables. The name of my software, which is not really software, is Spatial Scaffolding, because I think scaffolds are really cool: I don’t think you can build anything complex without scaffolding, but I don’t think we need to hide it. And I bought some Lego kits with electronics, to learn robotics and programming, including for kids six and above, which I think is very exciting. But I also bought a spaceship, the Artemis, and with it the SLS, what they call the launch system, with a lot of little details. The one part that is obvious, and that I think a lot of people hate, is the scaffolding, which I find amazing. You’re not going to send anything to space, or anything important, without scaffolding. And if you can learn how it works, if you can expose variables, if you can make the code visible, you should be proud of your scaffolding. And if we can do this, both for ourselves, for academics, for any kind of user, it can be empowering, beautiful, and then we can reach new heights. I’m not finished, I think we never are, but I have this kind of symbolic thing at home, and I like it.

Adam Wern: And there’s the Art Nouveau thing: in Brussels there are so many beautiful Art Nouveau houses, and they have this interesting philosophy of showing the scaffolding but making it ornamental, beautiful in itself. So there are opportunities to show code in a nice typographical way, to do interface elements in a beautiful way. We could have a beauty perspective on that as well. Many people are put off just by the looks of scaffolding, because it’s seldom designed; inside an Apple product, or a beautiful Belgian townhouse, it could be. We could have that ambition too: to make the inside beautiful as well, stylistically.

Fabien Benetou: It’s a really excellent example. And in terms of intersections, we couldn’t do without each other. Frode, this is, for example, where I would need your help. I often say, oh, I don’t care about the esthetics, and it’s not even true; it’s that I don’t take the time, or maybe don’t have the confidence for it. But the example you gave, Adam, beautiful scaffolding, is something I would really love to explore more this weekend, even without two beers. Though with two beers, even more so.

Leon Van Kammen: But I have things to say on this before we go, for sure. I texted you guys an image of a cathedral, and those buttresses sticking out are scaffolds, of course, and they are beautiful. So I think that is really, really important. Also, it turns out the word scaffold is actually French, so that’s a bit of a downer. Just kidding. But, so, I’m Norwegian, and Norway is a very civilized country at the moment, Sweden too, I’m sure. The reason for that is not that Norwegians are genetically or ethnically calm and clever people, absolutely not; we used to be Vikings, we did all kinds of things. I would say it’s the social and legal scaffolding that allows you to flourish in some ways and constrains you in others. It is really crucial. And within a software system, too, it is so important, because software can potentially let you do everything, but every single decision the software designers make scaffolds you in some direction. So having that exposed and interactable in many different ways, both for a given tool and environment and as a general principle users know about, is crucial. So I’m really grateful to you for showing us that.

Adam Wern: Okay. I have to run now; I have lots of things to do this afternoon. I look forward to continuing this, with or without beers, this weekend. I’m now even more sure that we have lots of super interesting things to explore. And perhaps we can do some architecture as well: go somewhere architecturally interesting to get inspiration. The physical.

Leon Van Kammen: Yeah, that’s not a bad idea. Just writing a note here. Boom. Also, please text me or email me or whatever your drinking preferences.

Speaker2: Yeah.

Leon Van Kammen: Whatever. I have to admit I’m on the cocktail side of things when we go out, because why drink the same thing you can drink from a bottle at home? But not everybody agrees. So, anyway, Adam, will you be here later today, or not?

Adam Wern: I really can’t. I’m double- or triple-booked. Yeah, lots of things, you know? Today.

Leon Van Kammen: That’s fine. Fabien, will you be here later today?

Fabien Benetou: Yes.

Leon Van Kammen: Okay. I’ll obviously upload this as well in the meantime. All right. See you guys as soon as possible, and properly later in the week.

Speaker2: Bye bye. Take care. Bye bye.

Chat Log:

09:06:02 From read.ai meeting notes  To  Frode Hegland(privately) : Frode added read.ai meeting notes to the meeting.

09:08:00 From Frode Hegland : https://docs.google.com/document/d/1Hb9fUxP_Hwi4SsJJa5zQgYv8f_djJzVDDaDeZQYAl5A/edit?usp=sharing

09:16:46 From Frode Hegland : How about NLS style always show next options as a bar?…

09:43:46 From Frode Hegland : AI Voice? Doug, Dene? Dave?

09:43:54 From Frode Hegland : Hal?

09:44:28 From Frode Hegland : Some name which is verb sounding related so it can voice speaking wise fit well with a next word being a command..

09:45:02 From Frode Hegland : “What are you talking about?” Where the voice AI asks us to point

09:46:01 From Frode Hegland : Ask question to get lists then speak items on the list?

09:47:13 From Frode Hegland : ‘Picard’?

09:48:21 From Frode Hegland : Hamlet

09:49:49 From Frode Hegland : H-LAM/T system (Human using Language, Artifacts, Methodology, in which he is Trained)

09:50:56 From Frode Hegland : https://www.wired.com/2006/01/robots-3/

09:51:17 From Frode Hegland : Spark

09:52:02 From Frode Hegland : Capek

09:52:56 From Frode Hegland : Albert

09:53:43 From Frode Hegland : Roy (Bladerunner)

09:54:13 From Frode Hegland : Something from R2-D2, C-3PO, And BB-8 ?

09:55:26 From Frode Hegland : Commander or Cmndr

09:59:08 From Frode Hegland : Ed

09:59:14 From Frode Hegland : Ole

10:17:16 From Fabien Benetou : Oleg Frolov, accept 3pm Tate Modern Saturday

10:19:39 From Fabien Benetou : (I bet Russian, studied at https://knastu.ru )

10:22:26 From Frode Hegland : Reacted to “(I bet Russian, stud…” with 👍

10:27:09 From Fabien Benetou : “Her”?

10:31:38 From Fabien Benetou : played what? didn’t get the name

10:36:21 From Frode Hegland : “What can I say right now” and the voice assistant knows what is on the screen

10:42:39 From Frode Hegland : DV

10:51:21 From Frode Hegland : Having some breakfast, will go visually off, but still here

10:59:19 From Frode Hegland : https://www.whitescreen.online/fake-mac-os-x-update-screen/

11:01:29 From Fabien Benetou : brb, just showing you one nerd stuff before going

11:03:03 From Frode Hegland : Mental and social scaffolding is very important too I think

11:04:09 From Frode Hegland : “mid-14c., “temporary wooden framework upon which workmen stand in erecting a building, etc.,” a shortening of an Old North French variant of Old French eschafaut “scaffold” (Modern French échafaud), probably altered (by influence of eschace “a prop, support”) from chaffaut, from Vulgar Latin *catafalicum.

This is from Greek kata- “down” (see cata-), used in Medieval Latin with a sense of “beside, alongside” + fala “scaffolding, wooden siege tower,” a word said to be of Etruscan origin.

From late 14c. as “raised platform on a stage in a play;” the general sense of “viewing stand” is from c. 1400. The meaning “platform for a hanging” is from 1550s (as a platform for a beheading from mid-15c.). Dutch schavot, German Schafott, Danish skafot are from French.

As a verb from mid-15c., scaffolden, “construct a scaffold;” by 1660s as “put a scaffold up to” (a building).” https://www.etymonline.com/search?q=scaffold
