9th of December 2024
Frode Hegland: All right. Normality has resumed. I don’t know why Zoom has kicked me out twice in a row, but there we go. So first of all, hi, Matas. Nice to meet you. Hi. Hello, everyone. Thanks for inviting me. So, a cozy group of people here. Yeah, very, very grateful that you’re here. So yeah, today is a Monday. A normal Monday made special by you being here, so thank you. The way we are doing Mondays now is we have about 15 minutes to go through our regular stuff, and then, of course, we will do introductions. I’ll just wait a few minutes to see who will join us. So that will be useful. And then we’ll have fun. I expect maybe 30 minutes for your presentation. Does that sound reasonable? And then.
Matas Ubarevicius: Yeah, I think I was preparing for 15, but I think I will go overboard. So yeah, I guess that’s pretty much where I will land at the end.
Frode Hegland: Okay. That’s. Yeah, that sounds perfect. And then we have tons of things to go through. Shall we just start with introductions, by the way, guys? Sure.
Brandel Zachernuk: Yes.
Frode Hegland: Okay. Let’s just go by random zoom screen. So since I’m talking a lot, we’ll start with this one. Mark Anderson, you are top left. Oh, well.
Mark Anderson: All right. Hello, I’m Mark. I’m an erstwhile colleague of Frode’s at Southampton University. My area of interest is hypertext, and through that, obviously, text. I’ve been part of the Future of Text group for about eight years now. And I’m also helping out with what’s going on in the Future Text Lab. Thanks.
Frode Hegland: Yeah, I would agree with all of that. Then it’s me. I might as well introduce myself, especially to you, Matas. I have been obsessed with text for quite a long time, for two reasons, as I’m sure you would agree. Number one, we use text for a lot of stuff, thinking, communicating, and so on, and there hasn’t been that much investment in text tools. You know, I was doing video in the 90s, when 640 by 480 was a big thing and all of that, and multimedia, they called it, and rich media. And “rich media” is a bit insulting to text. It’s not poverty-stricken media. Anyway, so we have a just ridiculously fantastic group of people here who are in this community. You’re seeing the core today. People come and go. That’s it, really. So, Matas, why don’t you go last, in case someone else joins us? Sure. I mean, the man who needs no introduction. Please introduce yourself. Hello, everyone. So it is I, Fabien.
Fabien Benetou: I’m a prototypist, which is my excuse to say I’m not a real developer. I don’t do a lot of the things related to real development, namely maintenance. I think it’s too complex, to be honest, and it’s my excuse also to play with gadgets like this, like the little pen to sketch in 3D, or to do it in 2D. But my main interest for this group, and with XR in general, with all those headsets, is because I have a bunch of notes and they are literally everywhere: in my wiki, my paper notebooks, all this. So I’m thinking, naively, that if I had my notes literally everywhere in space and I could grab them, bring them with me on the plane, that would be most interesting. So that’s my focus. And in terms of prototyping, I can show you something from just this weekend, just to say that it is really in space. So that’s my ice skating, and now it has lights, and I’ll put a couple more there to be able to go around the rink and draw in space, but with my feet, if I don’t fall first on my face.
Frode Hegland: Thank you, Fabien. I’m glad you mentioned the ice skates. We went ice skating yesterday, and we just used the skates they have. And first of all, they’re very painful. And secondly, the actual blades are not very long, which I find quite interesting. You know, we are tool people here, so the difference in the length of a blade makes such a huge difference to what you can do. So yeah, good little example there. Andrew, you’re next. And good morning.
Andrew Thompson: Oh boy. It’s me. Good morning. I’m Andrew. I’ve done a lot of the webXR development for the year one part of the project. So lots of webXR stuff. Specifically, exploring how to read documents in a sort of unlimited space, and then learning that unlimited is actually not what we want, and adding restrictions to things. So it’s been a very interesting year. And if you stick around, maybe you’ll see some of the stuff that we’ve been working on.
Frode Hegland: Absolutely. And Dene, my co-PI on the other side of the world.
Dene Grigar: Hi. I’m Dene Grigar. I’m lead PI of this project, overseeing the symposium, the book, these meetings, as well as the VR and XR projects. I’m at Washington State University Vancouver, which is located across the river from Portland. So not the big Vancouver, but the little one. I am also the director and founder of the Electronic Literature Lab. I do a lot of born-digital preservation and conservation work. I’ve been working in text for probably 40 years, if you think about the first degree I ever had. And I have done everything from writing straight text, to doing multimedia performances, doing net.art, hypertext literature, and now on to preserving all of that stuff for everybody else. And it looks like I may be working, Andrew, with the British Library on the generative project. So that’s kind of cool. So anyway, lots of cool things in the lab. Andrew and I have been working together now for, what, 4 or 5 years? Six years. And I’ve known Mark forever. I think it feels like the same amount of time. And then, of course, Rob, whose work I have preserved. I have his collection in my museum.
Frode Hegland: So Dene, with the British Library, does that mean more excuse to come to London, I hope?
Dene Grigar: Well, probably because I like to go there about twice a year anyway. Right.
Frode Hegland: Perfection. Karl.
Karl Arthur Smink: Hi. My name is Karl Smink. I am an XR developer. I have a CS master’s that I got from Mississippi State University. I then went to work for the Army Corps of Engineers for three years, making VR software for simulations for them, and decided I wanted to go back to school and work on a PhD. And that’s what I’m doing now. My research is on text entry in XR, specifically VR, which is how I got roped into this group. And I am hopefully going to make some big progress on that next semester, now that I’m done with my coursework.
Frode Hegland: Brilliant. And, Rob, as they say, definitely not least.
Rob Swigart: Well, it could be least. I’m Rob Swigart. I’ve been using text all my life, and I still use it to do writing. Mostly fiction, sometimes journalism, sometimes doodlings. And I’m still trying to figure out why I’m here, in what way I can affect or help along the future of text in 3D space, unlimited 3D space. I think it poses some interesting challenges for writing.
Frode Hegland: Rob, there are millions of reasons why you’re here, but I think you said one of the best ones right now. You have such a way with words. “I’ve been using text almost all my life.” I love that. That’s pretty spot on. You’d think more people would be interested in text, but it’s still a bit esoteric. So, Matas, welcome. Please introduce yourself. We have a few minutes of other things, and then we dive into the presentation.
Matas Ubarevicius: Yeah. So, yeah, I’m Matas. I’m also guilty of the same thing. I’ve been using text for all my life, and I see now my daughter using it. And how she develops her brain when she is starting to read, and read books, and it’s changing so fast up till that point. It’s just, well, just interesting to see how that affects the development of a child. But yeah. So I’m Matas, I’m a software developer. I studied architecture; that’s my background. I have a master’s degree in architecture. I studied in Delft in the Netherlands, and I switched my career to being a software developer. I’m a software engineer and the founder of bitbybit.dev. So I go a bit back and forth from architecture, from 3D, to programming, and then I go into programming deeper, and I just love combining those two worlds of programming and 3D, and architecture as well from time to time. But I’m not a pro in the sense that I don’t develop architectural projects; I develop software now, and that software is being used by makers, people who are interested in developing 3D applications on the web.
Matas Ubarevicius: Basically, what I do is a web platform that allows you to combine various CAD kernels, computational design kernels, and use them with popular game engines like three.js and Babylon.js. So probably you use those as well with your webXR experiments. And today I will just briefly introduce you to the tool. About me: I have also just recently started another company with my partner in the Netherlands. I lived in the Netherlands for ten years, and then we moved back to Vilnius, and I am based now in Vilnius with my family. So we live here, but there are still some relationships with the Netherlands. And so we started a company where we will do more work in the architecture, engineering, and construction markets. Bit by bit is a more generic tool that allows everyone to use it, and we will be applying it more in the context of architecture as well. So it will be interesting times. But yeah, today I will just talk mostly about bit by bit and how it works.
Frode Hegland: So you’re originally from Lithuania then?
Matas Ubarevicius: Yes, I’m originally from Lithuania. Yeah.
Frode Hegland: Where are you now? Are you there in Vilnius?
Matas Ubarevicius: Yeah. I’m there. Yeah, I’m in Vilnius now.
Frode Hegland: I just looked it up on the map. It’s not that far away from us.
Matas Ubarevicius: Yeah.
Frode Hegland: Yeah. That’s awesome. I have friends there, but I’ve never been. So that’s lovely. Brandel, good timing. We just did the introductions. Matas did a brief introduction; he will do a presentation soon. Would you mind doing a brief introduction of your esteemed self?
Brandel Zachernuk: Okay. My name is Brandel, and I work on Vision Pro, and I build the web standards for spatial computing there. So I’m proposing and designing the HTML model element, and I work on the webXR specification in committee with folks at Meta and Samsung and Google. And that’s actually it. Now, unfortunately, Mozilla and Microsoft are not part of the conversation, which is a shame. But yeah, I joined this community a couple of years ago to kind of prompt their interest in spatial computing, and in what to do with the space, for putting information into space, you know, when you don’t have a small pixel grid. I’m motivated very deeply by the idea of embodied cognition and the extended mind, these ideas that having spaces that tell you what to do with them is immensely powerful, in terms of being able to remind you of what you’re there to do and what kind of capabilities you have. So it’s a very architectural idea, insofar as spaces are encoded with the specific meanings that you have there. So it’s cool to hear that one of the goals for bit by bit is that. But yeah, that’s me.
Frode Hegland: You didn’t join the group, Brandel, in the best possible way. You came as a pirate and completely changed the course of the group, which has been to our benefit.
Brandel Zachernuk: I hijacked the group. I think that’s not unreasonable to say. Yeah. That’s fair.
Frode Hegland: You subtly changed our direction from flat text. You see, Matas, when Brandel joined, it was a little before I finished my PhD, and I was all about this one thing. You know, you’ve got to be focused for your PhD. And then Brandel said, have you thought about adding this dimension? And if he had said it a month earlier, my head would have exploded. But it was fortuitous timing. So now the general feel of the Future of Text community is: of course, there are many types of text we can deeply care about, including pencil and paper, including digital pencil on digital paper. But in a way, we’re now looking pretty much at 3D space being the base, and everything else as a useful flattening. So it’s changed everything quite a lot. Jesse, I’m glad you made it in your commute today. From what was it? Bologna to Cessna. I thought Cessna was just an aircraft.
Ge Li: No, the right pronunciation is Cesena. Okay. It’s a very small city in Italy.
Frode Hegland: That’s really cool trivia. So, Jesse, do you mind doing a brief intro?
Ge Li: Okay. So my name is Jesse, and currently I’m a student at Bologna University. I study computer science; specifically, my major is called Digital Transformation Management, and my interest is a kind of mixture of the future of text and also artificial intelligence, neural networks, this kind of stuff. And I met, I think, almost everyone recently in Portland, or Vancouver. So, yeah, I’m very happy to meet everyone again online. And today the weather is so bad in Italy. It’s raining all the time and the weather is so cold. I don’t know why, because last year was not like this, but this year, maybe it’s kind of climate change. Yeah.
Frode Hegland: Yeah. But glad to see you safe indoors with a blurry background, so that’s good, right? Yeah. I’m very happy you guys are here. So, first of all, this call is, of course, recorded. It will go up on Zoom, where it will be transcribed by Sonix. I will just assign names. If there are any mistakes there, I hope to catch them. And then we put that into our record, and Mark Anderson, in a few minutes, will talk about what he’s done with the record that we’ve had so far this year, which will be part of our 15 minutes before presentations. So by way of introduction, I do have a few notes. Number one is, Brandel, you mentioned extended cognition, which is, of course, a very important part of the community. And I noticed the other day someone was counting with their fingers, and I thought that was kind of beautifully interesting, because it’s that extended mind, you know, it’s connected to the brain, but it is kind of outside. It’s a nice border case, because we have discussed before that if you look at your watch for the time, is that really part of you, or is it an external thing? So I just thought that was amusing. Anyway, since you brought it up, Brandel. Two other things. Today, actually, is the 56th anniversary of Doug Engelbart’s demo.
Frode Hegland: So I think it’s really nice, Matas, that you are talking about extending our workspace today. I think that’s really beautiful. And finally, I had lunch with my mother today, which is nice; I don’t get to do that very often. And it was relaxed enough that I could actually try to explain what we’re doing. So I’d like to spend a minute to explain back to you guys, who are doing it, what I told her, and it was very simple. It’s basically that we’re trying to make it possible to move units of knowledge. And the second half of that, which is a bit obvious, is that until XR, we’ve always had someone else own our background, our top view, our desktop, whether it’s the Finder on the Mac or whatever it is. So it’s always been a bit difficult to provide a new environment. But what’s amazing now to me is that, as a designer, I can, and I think what we’re doing here allows any other designers to do the same thing: basically, build an entire knowledge space for people, and a knowledge base such that if they don’t like one, they go to another. And the final thing I’d like to tell specifically you about that is we’re currently looking at the icon perspective of doing that, meaning anybody should be able to move it, like an icon on a desktop; it doesn’t mean they can open what’s inside the document.
Frode Hegland: So that means that if you make what we call a knowledge sculpture, or a volume of knowledge, and let’s say something I designed, and then maybe something you’ve designed, it should be entirely open, easy, and possible to do that. But of course, if one of those little knowledge objects contains a specific 3D model or Photoshop file or whatever it might be, it may not be openable. And I’m saying that partly as an introduction, Matas, to what we’re doing, but also to highlight to all of us how absolutely, ridiculously important this is, because you can’t move knowledge today easily. Just look at, you know, document incompatibilities, and look at all these things. All companies want to own the knowledge bits. So if we meet together in ten years and we’re not able to put on a headset or projector or whatever and share things, we’ve failed. But at least now we have the opportunity to do that. So I’m very grateful for you to come in here and really help us look at more of the dimensional aspect of that. And on that note, unless anyone has any questions, I’ll give the focus over to Mark Anderson for a minute or two. Okay, Mark. All right. Yeah. You’re still muted, of course. There we go.
Mark Anderson: And of course having just recently moved computer it’s now asking for some wretched showing justification. Oh come on. Oh right. I have to quit and reopen. I’ll be back in a sec.
Frode Hegland: Okay.
Mark Anderson: Ridiculous. Okay.
Frode Hegland: Yeah. Zoom is doing all kinds of weird things with me today. But yes, reboot, Mark. Good point. So while we’re waiting for Mark to do the restart, there’s been a little discussion that’s very pertinent to this, which we can talk about later. Just to throw out the question: what do we call the things of knowledge in space? We’re currently having a discussion between “map” and “volume”, which both have their places, and it should be something that would make sense everywhere, whether you have a headset on or you’re on your computer or whatever. So I just thought that was an interesting thing to mention, to see if it sticks to the back of anybody’s brain. And here’s Mark again. Yeah. Mark, go ahead please. You are sharing, but I think you might still be muted.
Mark Anderson: Apologies I came in. I came back in muted. Can you hear me now?
Frode Hegland: Yes, yes.
Mark Anderson: Okay, right. I’ll go very quickly, because possibly the meat of this can get shunted to a Wednesday meeting. But this is about the record. I’ll put you where I’m not looking sideways at people and actually end up looking at the pictures. What I’m looking at here is that Frode very kindly made a summary of basically the transcription and other information that we’ve collected over the last year of recorded meetings. And for instance, these are the AI-generated summaries. And I looked at this and I thought, okay, here we have immediately the problem of working in long form, so that the content starts on page 25 of the document. I just thought, wouldn’t it be more useful if we turned this back into addressable elements of text? So he very kindly made me a plain-text version of that. So I’ve been through and cleaned out quite a lot of typographic cruft that’s come in, weird symbols and stuff that’s turned up that we don’t need. And then I basically imported it into a tool called Tinderbox, which I happen to be familiar with. So effectively, you’ve got a meeting, then I’ve broken out the subheadings, and I’m in the process of extracting information about those, so that we can start to query the information, like so. So what’s been our ongoing sort of summary of, for instance, agreements and disagreements?
Mark Anderson: Are we still talking about the same thing? Are we talking about different things? And I’ve just about got as far as being able to, well, essentially this is just HTML here, deliberately not styled, because my approach is to never put any style in, or only the bare minimum, until you know you’re styling the right data. Otherwise it’s just wasted effort. But all of this, for instance, at the moment, is one element of information that I’ve got stored essentially in this database. The aim is to pull out even more. So, for instance, these URLs: if I pick another one that’s got some, you see I’m basically finding these and extracting them and storing them, so we can more usefully use the data. And the reason for laying this in front of you is that a couple of things are interesting. It turns out, as time has gone by, that the prompt used to summarize from the transcription has changed. And in fairness, the transcription itself has changed. And lest we not realize how much, Frode, bless him, has done an awful lot of work in the background just generating this. So the point of raising issues here is not to sort of roll my eyes and say, oh, it’s missing X, but to say, going forward, what can we usefully do to our record? A couple of things strike me immediately about the summary.
Mark Anderson: The prompt for every summary that’s made should be recorded, unless it’s unchanged from the previous one. So then we have a record of all the prompts, which can be cross-connected to the summaries they create. If the process from transcription through to summarization changes, that similarly wants to be treated as a loggable element, so that we know that, you know, we were doing the same process for a month, and then we changed it for some reason, which we should write down again. Then all this can be cross-connected. So when you start to look at the information that’s actually contained in these notes, then we can actually do something more meaningful with it. And for instance, we can do some of these things like pulling out names and providing a glossary of things. Another interesting thing: Frode helpfully added the glossary, which I think came out of his thesis, into the end of this. And I went through it, interestingly, because it’s hard to spot typos in the small box that’s used for doing glossary entries and authors.
Mark Anderson: So there are a few things there which I’ve cleaned up and can pass back to him. But it’s interesting: it made me think about the fact that if I write a glossary and some of it’s written in the first person, understandably, given where it comes from, it doesn’t make sense when viewed as a collective item. So that’s something to think about there, and our use of glossaries, and how we state things, so that it makes sense to the reader. And really, that’s the gist of it. And I’m just putting it out here in case this prompts any of you to think, well, wouldn’t it be useful if I could find X, or wouldn’t it be useful if we were recording this extra bit of information? So that at the end of the project, for the Sloan grant especially, we can pull together stuff, and we can have, for the reader of the information we provide, all sorts of interesting insights across it. And I’ll stop there, because I don’t want to detract from the time we want to put into having the presentation. But I’ll just ask before I stop sharing: are there any immediate questions?
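Mark’s pipeline, as described, boils down to: start from plain text, strip typographic cruft, then pull out addressable elements such as URLs so they can be stored and queried. As a rough illustration only (the function, regex, and sample note below are my own sketch, not taken from Tinderbox or any tool mentioned in the discussion), that URL-extraction step might look like:

```javascript
// Illustrative sketch: pull URLs out of plain-text meeting notes so they
// can be stored and queried separately. Names here are invented.
function extractUrls(text) {
  // Match http(s) URLs, stopping at whitespace and common closing brackets,
  // then trim trailing sentence punctuation that prose tends to attach.
  const matches = text.match(/https?:\/\/[^\s)>\]]+/g) || [];
  return matches.map((u) => u.replace(/[.,;]+$/, ""));
}

const note =
  "Discussed the demo (see https://futuretextlab.info/) and the archive, " +
  "https://example.org/record.";
console.log(extractUrls(note));
// → ["https://futuretextlab.info/", "https://example.org/record"]
```

The same pattern (one small pass per element type) extends naturally to speaker names, dates, or glossary terms, with each pass writing into its own queryable field.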
Frode Hegland: I have an immediate comment and that is thank you. We’ve been talking about doing this kind of stuff for a long time, and it’s relatively easy for me to just quote unquote, dump it somewhere. I’m very grateful that you are now working on making it useful, so that we can start building visualizations and queries and systems and seeing what we’re doing. So it’s amazing. Grist for the mill. Thank you Mark.
Mark Anderson: Before I forget. One other quick learning thing is always start with plain text and save the plain text before you do anything to it, because you’re only going to be adding crap that you don’t need when you want to do text analysis. Lesson learned the hard way, right? I’ll stop at that point. Thanks.
Frode Hegland: Plain text for plain sailing, right? I see you have a thumbs up. That’s just a thumbs up, right, Fabien? Cool. All right. Matas, go ahead. We are very ready. Eager. Running over time, of course, but there you go. Sorry about that. And please.
Matas Ubarevicius: Yeah. So I’ll do my best to be quick. Let’s start sharing. Yep. You see me? Okay. Yeah. Do you see the slides? Yes. Yeah. Okay. Perfect. Yeah. So bitbybit.dev, that’s the project that I will introduce today. And I say here that it’s a platform for easy programming of geometry. It’s a small lie. I mean, programming geometry is never very, very easy. You probably know that quite well from your experiments, but I do my best to make it easier than it otherwise would be. And you can find bit by bit on X; I post there quite regularly, once every week, more or less. I’m not a very heavy user, but yeah, I try to do that. So yeah, I already introduced myself, so I’ll skip that to save some time. We will talk today about what bit by bit is. I’ll then go into demos, just to show a little bit how it works. And I understand that I have more or less half an hour, so maybe I’ll spend a little more time on some parts. But yeah, I think it will be brief. Then I’ll talk about why I call bit by bit a platform, why I try to think about it as a platform. Then I’ll just briefly touch on this topic of CAD and text and webXR from my personal experience. I’m definitely not as involved in webXR as you guys, but I did experiment with it. I did have some, yeah.
Matas Ubarevicius: First impressions on how text, or not having text, is affecting the experience. And then the rest is for discussion: questions, answers, whatever you have on your mind, I’ll try to answer. So what is bit by bit? Well, it’s a web platform that helps develop custom 3D experiences. It has built-in integration with game engines, and it just allows you to code geometry in a more natural way. From my architecture background, I’m used to tools like Grasshopper and CAD modeling tools like Rhino, or Autodesk products, where you have a lot of possibilities. But on the web, that wasn’t really available when I started this project. And my goal was to just make it easier, you know, to have CAD algorithms available to me as a developer and to build products out of that. So bit by bit was born as a platform, as a system, as a collection of components and algorithms, from that sort of need, a personal need, more or less, even. Yeah. So bit by bit tries to provide tools that allow you to code tailor-made designs for the era of personalized products. And by product, I just mean a design, any type of design. Nowadays, I’m sort of trying to think about humans’ personal interests, and everyone has different kinds of interests and different kinds of needs. Even if you buy furniture, for example, like a table, you know, it would be nice in the shop to say, yeah, I just need a 155mm table, because that’s the only piece that will fit perfectly in my kitchen.
Matas Ubarevicius: So this is the goal of bespoke design: to allow us to build more and more of these kinds of products that are customized to our personal needs. So what I did is I basically built tools that help me sketch ideas, but also help others to play around with code. And I started with Blockly, and I thought at first that I would just make a tool for kids to play around with simple geometry. You may notice also that if you open the editors of bit by bit, like in incognito mode, you don’t need to create accounts or anything. You can just start using them right away, and all of the open-source algorithms are just available. Basically, most of the things that it can do, you can just do, because everything is available. And that comes from a need, because in the European Union we try to protect kids from leaving personal information all over the place. So I allowed those editors to be used by everyone, without logins. And if people don’t accept cookies, basically, I don’t even know that someone is using those editors. They are quite disconnected from the platform. But then, of course, if adults or parents want to make accounts, then that’s of course possible. Then they have to agree to the terms and all kinds of things.
Matas Ubarevicius: And then I can also share that code, because I can monitor that there is no bad code on the system. And yeah, I’m giving a slightly extended version of these explanations here. So, on bit by bit there are three types of editors. One is to directly code in TypeScript. In any case, we have a lot of npm packages that you can just take and integrate in your websites, where you would code JavaScript or TypeScript, whatever you prefer. But we also have a ready Monaco editor on the website to sketch your ideas, you know, experiment really fast and get the results. Then there’s Blockly. It’s a tool similar to Scratch, quite familiar to kids who learn programming, where you design by fitting blocks together. It’s still basically programming, but by combining bricks in various ways, which is also sometimes more intuitive for people. Then there’s Rete, and Rete is a sort of Grasshopper-type programming environment. I don’t know if there are architects among you, but it’s quite familiar to people from the parametric design space: this style of coding where you just join algorithms with wires, and you sort of transmit data through those blocks to create your design. I’ll just briefly show you how those editors work today. So these are three editors, but basically the code base beneath those editors is all the same, and the components for the graphical editors are automatically generated from the API of the TypeScript layer.
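The idea that the graphical editors’ components are “automatically generated from the API of the TypeScript layer” can be illustrated with a toy sketch. To be clear, this is not bit by bit’s actual generator; every name below is invented purely to show the general pattern of deriving a visual node description from a typed function declaration, so the visual editors and the code API stay in sync:

```javascript
// Hypothetical declaration of one API function, as metadata.
const apiDeclaration = {
  name: "createCube",
  category: "shapes.solid",
  inputs: [
    { name: "size", type: "number", default: 1 },
    { name: "center", type: "point", default: [0, 0, 0] },
  ],
};

// Turn the declaration into a node spec a visual editor could render:
// one input socket per declared parameter, preserving defaults.
function toBlockSpec(decl) {
  return {
    label: `${decl.category}.${decl.name}`,
    sockets: decl.inputs.map((i) => ({
      key: i.name,
      kind: i.type,
      value: i.default,
    })),
  };
}

console.log(toBlockSpec(apiDeclaration).label); // → "shapes.solid.createCube"
```

Running one generator like this over the whole API is what makes it cheap to keep three editors (TypeScript, Blockly, Rete) backed by a single code base.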
Matas Ubarevicius: So that is a short introduction to these editors. And yeah, I thought I will not go too much into details; later on I still have some slides, but I will first just try to introduce you to bitbybit.dev. It’s also important to note that bitbybit.dev is the web address of the platform. Here you are greeted by this parametric object that is floating, and I will just move you guys a little bit to the side. Yeah. So it’s a simple spiral. To me it sort of represents an infinity of possibilities. And you can already play around with it here: you can, you know, make it less wide, more wide, have it more dense. You can also just control how many spikes it has. So it’s all parametric, and it represents a little bit of what you can do with bit by bit. And of course, you know, things like just changing colors in this reactive approach. And here you also, of course, will find the same information that you saw on the slides with those editors. What I will do is I will just go ahead immediately and start the Rete editor to show how it works. So I will click the right mouse button, I will type “cube” here, and I will find this command in the OCCT shapes category. That means that I will be using the OpenCascade kernel, the OCCT kernel, and I will drop this component.
Matas Ubarevicius: And you see that it’s on the screen. I will change its size now to three. To make it parametric means, right, that you can control various parts of your geometry through some sort of input, like a variable, maybe. So I will create a number slider in the same canvas, and I will connect it to the size input. And you see now that with this slider I can change the cube. So it’s now reacting, and in that way I’ve created a parametric relationship between the size and the number slider. If I swap the canvas, I enter this 3D space where I can look at this object. So these editors are great if you’re still learning to code, right? You can experiment with all the algorithms that we provide. Another thing that I will do real quick is fillet the edges of this cube. Just for the sake of the demo, I will hide the original. This is what Fabien wants to be improved a little bit, but yeah, let’s see. So now I have a rounded cube. I will change its radius as well. And you will see that if I change the original size of the cube, this whole script adapts immediately and the fillet gets applied. I can even go ahead and say that I want to also control the fillet itself. So now I will say, okay, the maximum of the slider has to be.
Matas Ubarevicius: I'm not sure if you see that well, but the minimum is zero and the maximum is one. I will put it on 0.5 and connect it to the radius. Now I'm controlling the radius with this slider, those parametric systems sort of interact, and you have the working model. The same approach can be extended to create many things — like these houses, which I will show a bit more later. Here you see the same code, the same exact thing. I will just run this code. This is coded in the TypeScript editor, called Monaco. So here you see as well OCCT shapes solid create cube, and here I apply the fillet. If I swap the canvas, I enter this 3D environment where I can again look at my model. An important thing as well is that if I click on this link, I immediately go to the TypeScript documentation. Basically you can then go and understand what kind of inputs you can use, both in the Rete editor and in TypeScript, as well as in Blockly. So everything is related. And now let's go into Blockly real quick. I'll just hit run. Here you see a 3D text — maybe that's also interesting, that we can create text in Blockly — but Blockly is a bit different kind of environment for coding. Here you can compose blocks and decompose them like that.
Matas Ubarevicius: So you just drag that into the input, and then you drag it again to another input, and in this way you compose those blocks — not with wires anymore, but by dragging and dropping the components into the right places. So here you can of course change things. I don't really support too many fonts, because fonts are quite hard to make workable with 3D, but here I changed the font. I can also change the text, of course — "future of text" — let it run, and you see that the script executes, the camera goes to its original position, and you have the text here in the 3D scene. Other aspects are of course possible to code. What I want to do real quick is create the cube again. I will go to this OCCT category, to shapes, solid, primitives, and I will create a cube — the cube that we already saw. I will delete this component from the canvas and use this one to be drawn. Now I just hit run, and again you see that the cube was created. And if I go back, I click the right mouse button on this component and click help, and again I go to the same TypeScript documentation about this algorithm that explains a little bit what it does and what kind of inputs there are. So this is how the system is related to each other behind the scenes, and how you can code it in different ways.
Matas Ubarevicius: And everyone prefers different kinds of approaches. Blockly is maybe better suited to people who want to actually learn programming in the end. Rete, for example, is also quite nice for professionals because it's so reactive — you see things happen really fast, so coding out ideas is really fine in Rete as well. But if I have a client and I really need to code a big application, then I just turn to TypeScript and don't use the editors too much — maybe to sketch some ideas and experiment with the concepts. And for kids, I think Rete and Blockly are really fine to use. The visual, reactive inputs are easy to change — like colors. If you go just real briefly back here to the color, you will see that the component has the color picker immediately available. So it's just easier to code if you're a child. So this is a little introduction to the editors themselves. And now I wanted to show not the cube anymore — I think cubes are fun, but it's possible to create other things with Bitbybit. So here I have a parametric… Oh, yeah. Fabien, if you want, just shoot your question. Yeah.
Fabien Benetou: It's not a question, it's a quick remark — that I'm very happy to see some of the comments in the chat basically saying that it's beautiful and it makes sense in our context. Because I lied a little bit to you: I said everybody will get it, but I was a little bit afraid it was remote from text. But obviously, from the kind of response, people do understand the connection — that texts are within space, and that spatial extent needs to be shaped too. So it's perfect, it makes sense in our context. I'm happy about it. Please go on.
Matas Ubarevicius: Yeah. No. Like. Yes, sure.
Frode Hegland: Oh, no, sorry, I was just going to say — now that we're going to this next step, and thank you, that was great — Brandel has asked a question here in the chat that this may now be a good time to address. Before we go on to more details, he's asking what the guts are — if it's basically CGAL or a headless Houdini, etc.
Brandel Zachernuk: Yeah, I mean, it's kind of an inside-baseball detail of how you build a computational geometry application — whether you're using CGAL or whatever else. But yeah, I'm interested. You mentioned open source algorithms. So yeah.
Matas Ubarevicius: So I will get to the open source algorithms part a little bit later, but basically, yes — on GitHub we have a monorepo where you can take the headless algorithms and use those. I can share the link to the bitbybit-dev monorepo later. The editors themselves are the heads, and then there are the headless parts. I will address that in the topics where I talk about why I call Bitbybit a platform — that's where I will explain how to tap into those components. Basically, npm packages are available for people to use, so the OCCT kernel that I integrate is available there. I hope I understood the question.
Brandel Zachernuk: Yeah. No — so if it's all JavaScript based, everything actually runs client-side, in that, you know… Yeah.
Matas Ubarevicius: Yeah, it's all on the client side. There's no cloud involved in this. So web workers all the way, and WASM — WebAssembly. That's how it works. Cool. Yeah.
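The "no cloud" setup described here — geometry jobs handed to a web worker hosting the WASM kernel, with no server round-trip — can be sketched roughly as follows. The request/result shapes and the `handleRequest` function are hypothetical, not bitbybit's real worker protocol; the handler is written as a pure function so the flow is easy to follow.

```typescript
// Sketch of a client-side geometry pipeline: jobs go to a web worker that
// hosts the WASM kernel. Message formats here are invented for illustration.
interface GeometryRequest {
  id: number;
  op: "createCube";
  args: { size: number };
}

interface GeometryResult {
  id: number;
  vertexCount: number;
}

// What a worker's onmessage handler might do. A real kernel would build
// B-rep geometry via WebAssembly here; we only report a cube's vertex count.
function handleRequest(req: GeometryRequest): GeometryResult {
  switch (req.op) {
    case "createCube":
      return { id: req.id, vertexCount: 8 };
    default:
      throw new Error("unknown operation");
  }
}

// On the main thread this would be roughly:
//   const worker = new Worker("geometry-worker.js");
//   worker.postMessage({ id: 1, op: "createCube", args: { size: 3 } });
//   worker.onmessage = (e) => drawMesh(e.data);
```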
Brandel Zachernuk: I look forward to more detail. Thanks.
Matas Ubarevicius: Sure. So this is an example of a small pavilion for bikes. That's the model that also loads 3D-scanned bikes — my wife's as well. And it's also parametric. So here's a bit longer script — you will see that I have a lot of components here — but I can also control things. Maybe I just need a bike shed for… oh yeah, not a slider, let's see, this one. So if I want to create the bike shed for three bikes, it will recompute, and now I have a bike shed for three bikes with a smaller number of entities and different geometry. And if I want to go crazy, to 14 bikes, that's also possible. The geometry recomputes, and as a designer I don't need to spend time remodeling this small architectural piece to suit the new need of the user. If you, for example, sell products like these bike sheds, you just configure it to suit the client's needs. And this is then a bit bigger script. Here you will see already a bit of this platform approach: the same code that I had here I exported to a runner — Fabien is a bit familiar now with runners. So this code was exported to the Babylon.js runner, and then via StackBlitz it was exposed as an application to run in 3D. I've built a small UI here on StackBlitz where I can again control these things, and I can switch between the kids' bike and the adults' bike here. Yeah. Fabien. Sure.
Fabien Benetou: Yeah, just a quick remark, to connect that slider with what was explained briefly before about the runners and the no-build approach. You can also imagine it as a VR slider: you pinch on that slider, you're in space in 3D, and the model responds to that. Here it's on the web page in 2D, of course, changing the 3D model, but you can imagine this being inside an immersive environment.
Matas Ubarevicius: Yeah, yeah. I will try to also show some videos of WebXR experiences that I've built where I did use some graphical user interface elements. But indeed, what Fabien says is correct: those inputs can be provided to the scripts from many different sources, be it WebXR graphical user interfaces or, in this case, HTML inputs. In the context of the editor, it was just this number slider that was connected to it. Here you see that — it's a bit technical, but with "get runner input value" I can get it from different contexts as well and use it in the script. This is to showcase that the scripts that people create — even if they don't know how to code — can still be exported to JavaScript and run. That JavaScript itself is maybe not very beautiful, but at least technically it's feasible to export those scripts to JavaScript. So I'll go to another slide. This is again a different kind of design: a table, and at the epicenter of it there's a 3D model of a palm. Babylon.js really supports this technology called Gaussian splatting. It's quite a new thing on the web — well, not that new anymore in the web standards. But this is really the plant that I scanned with my mobile phone.
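The point about inputs arriving from many different contexts — an editor slider, an HTML input, a WebXR UI element — can be sketched as a simple lookup across input sources. The `getRunnerInputValue` name echoes what appears on screen, but this signature is an assumption made for the example, not the real runner API.

```typescript
// Illustrative lookup of a named script input across several contexts.
// Signature and behavior are assumptions, not the real bitbybit runner API.
type InputSource = Map<string, number>;

function getRunnerInputValue(
  sources: InputSource[],
  name: string,
  fallback: number
): number {
  for (const src of sources) {
    const value = src.get(name);
    if (value !== undefined) return value; // first context that provides it wins
  }
  return fallback; // the script still runs if no context supplies the input
}

const xrInputs: InputSource = new Map(); // nothing set from the XR panel yet
const htmlInputs: InputSource = new Map([["bikeCount", 3]]);

// Same script, same parameter name, regardless of where the value comes from.
const bikes = getRunnerInputValue([xrInputs, htmlInputs], "bikeCount", 1);
```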
Matas Ubarevicius: I just generated the 3D model, and this model as well is parametric — how high the table is, how many legs I want it to have, I can control in this model. So now it just recomputed to have a bit fewer of those elements. Also, if you increase the height of the table like this, you will see that the model just recomputes. It sometimes takes a bit more time. So now the table is higher and I don't need to remodel anything. As a designer, you're free to decide which parameters you want to allow your users to change — which are important — and which should remain static. In this case the palm remains static, but the table is adapting constantly. Another example — this is just to express this approach where code helps. This is a conceptual architectural piece; it's not meant to be built or anything. But imagine you would need to design this 3D model by hand — there are so many small details here, like those windows on the top. If something changes, you would need to really remodel everything manually, and that would take ages. That's where parametric design helps you: you design a system, and if something changes, you change maybe some variable, and the model will recompute and re-adapt.
Matas Ubarevicius: And this is just making a lot of sense in certain types of designs — definitely not all — but sometimes the manual approach doesn't really work anymore, so computers and algorithms help. You see that this script isn't too big, right? It's quite large, it's long, but it's not crazy big — you could still go through those components and read it out. You will find some JSON editors integrated here. You can also, by the way, write code in these components: you can go here, hit this TypeScript component, and do some coding here, and even provide inputs to these components. So this is a bit of fun if you're a developer to play around with, and many things are possible. So I went through some examples, and I will just go back to the slides to explain this aspect of the platform. Okay. For me to call some piece of technology a platform, I have this idea that various levels of the stack should be available to users. Users should be able to tap into what is basically available. For example, if you just want to use the OpenCascade geometric kernel in your applications, you should be able to do that. If you want to use this whole CAD system with three.js, it should be possible. If you want to use it with the Babylon.js game engine, it should also be possible.
Matas Ubarevicius: So for all of these situations I've developed npm packages, so you can pick and choose how you combine your stack and where you use it. If you go to npmjs.com and search for bitbybit-dev, you will find the components. I mean, it's not a crazy popular library yet, but it's getting there. So for example the core — you can plug it in and use it either with three.js or with Babylon.js, and you will see two dependents: these two libraries that we expose as well are meant for those game engines. I don't have more integrations, but potentially, you know, PlayCanvas could be added here, and then those algorithms would be available to that game engine as well. In the context of three.js, the layer called bitbybit-dev/threejs is just providing some helper functions to create meshes out of the geometry that the CAD kernels create. This is how it functions — a bit hard to explain, but basically you see here in this diagram that all those npm packages are stacked on top of each other, and each layer depends on the core layers. At the end you just have to choose your flavor — is it Babylon.js, is it three.js? — and you can go and use it with those game engines. Yeah. Fabien. Sure. Yeah.
Fabien Benetou: A quick remark, especially here in our context: quite a few people are relying on three.js. And today, for example, I added a mesh generated by the OpenCascade kernel through bitbybit-dev, and I was then able to traverse that mesh and change the wireframe color — the things people would normally do with a three.js Object3D. So again, that's the connection for people who are not familiar with this entire stack but are familiar with three.js: as long as they get their mesh, their Object3D, they can do the usual things we want to do. It's been generated — manipulated to be generated the right way — and then you're back in well-known territory.
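The layering Matas and Fabien describe — a core package that computes geometry, with thin engine-specific layers turning it into three.js or Babylon.js meshes — can be sketched as an adapter pattern. The types and functions below are invented for illustration; the real bitbybit-dev packages are considerably richer.

```typescript
// Adapter-style sketch of the package layering: a core layer computes
// geometry with no game engine involved; engine layers depend on the core.
interface CoreGeometry {
  positions: number[]; // flat xyz triples, what a core layer might emit
}

interface EngineMesh {
  engine: "threejs" | "babylonjs";
  triangleCount: number;
}

// "Core" layer: pure geometry, engine-agnostic.
function computeTriangle(): CoreGeometry {
  return { positions: [0, 0, 0, 1, 0, 0, 0, 1, 0] };
}

// Engine adapters depend on the core, never the other way around.
const adapters = {
  threejs: (g: CoreGeometry): EngineMesh => ({
    engine: "threejs",
    triangleCount: g.positions.length / 9,
  }),
  babylonjs: (g: CoreGeometry): EngineMesh => ({
    engine: "babylonjs",
    triangleCount: g.positions.length / 9,
  }),
};

// Choose your flavor at the end, as in the diagram.
const mesh = adapters.threejs(computeTriangle());
```

Once the adapter hands back an engine-native mesh, everything Fabien mentions — traversing it, changing the wireframe color — is ordinary three.js work on an Object3D.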
Matas Ubarevicius: Yeah. Also, I'm not a PhD in mathematics, right? Developing geometric kernels is not what I do. What I do is take the kernels that those PhDs develop and try to make them more accessible. I do develop some algorithms on top to make them a bit easier to use. So I dig into the APIs of those kernels, I use them as a professional, but I also try to simplify a little. For example, it is possible in Bitbybit to create OpenCascade geometry, then create a Babylon.js mesh, convert that into Manifold, and use Manifold algorithms against that mesh. This is a little technical, but without these layers that allow you to move the data between those geometric kernels, it's really hard to use them other than in isolation — it would be a bit of a waste of time. And I will just finish this technical part by explaining a little what a CAD kernel means. There are two universes, right? One is the platonic universe — that's how I try to conceptualize it — where CAD kernels do their thing: they compute geometry. For example, you can say, I want to cut one sphere from another sphere. But during that process, you don't really need to draw anything on the screen.
Matas Ubarevicius: You just do that somewhere in the background, in that CAD kernel. It computes the whole result. And then at some point, when you're finished with all your manipulations of geometry, you say in three.js: okay, kernel, give me that mesh. Give me that object, give me that sphere, give me the result of this operation — whatever it is. Maybe sometimes it's a line, right? Maybe you're cutting a surface with another surface — then it will be a curve. Or if you're doing Boolean difference operations on solids, it will probably be another solid. So at some point you're finished with all those manipulations, and then you tell Babylon.js or three.js: draw me that object on the screen. And that's when it becomes visible. In the context of WebXR, that means the computations can happen on the Meta Quest Pro or something — it can do the computations in the background, and then at some point you just draw it and make it visible on the screen. So this is a little bit how to conceptualize it. Yep. Sure.
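The compute-then-draw split described here — kernel operations that return opaque handles and render nothing, with meshing deferred to the very end — might be sketched like this. The handle type and the volume bookkeeping are stand-ins invented for the example; a real B-rep kernel such as OCCT tracks full boundary representations.

```typescript
// Compute-then-draw sketch: kernel work is invisible until the final
// conversion produces triangles for the engine. Everything here is a toy.
interface ShapeHandle {
  kind: "solid" | "curve";
  volume: number;
}

function sphere(radius: number): ShapeHandle {
  return { kind: "solid", volume: (4 / 3) * Math.PI * radius ** 3 };
}

// Boolean difference happens entirely in the kernel's "platonic universe":
// nothing is drawn on screen during this step.
function subtract(a: ShapeHandle, b: ShapeHandle): ShapeHandle {
  return { kind: "solid", volume: Math.max(a.volume - b.volume, 0) };
}

// Only at the end do we ask for a mesh to hand to three.js / Babylon.js.
// Lower precision values stand in for "more triangles, more GPU load".
function toMesh(shape: ShapeHandle, precision: number): { triangles: number } {
  return { triangles: Math.ceil(shape.volume / precision) };
}

const result = subtract(sphere(2), sphere(1)); // pure kernel work
const mesh = toMesh(result, 0.5); // the single "draw me that object" step
```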
Dene Grigar: Yeah. So when I think of the platonic universe, I think of the solids that he lays out in his philosophy, right — in his dialogues. Yes. But when I think about the virtual world — and this is what I've been trying to get through the heads of my students — we're not bound by the platonic shapes, right? We can do other things. So I guess my question to you — I see that you've got a half moon, which is not a platonic shape.
Matas Ubarevicius: To the.
Dene Grigar: …imagination. So I'm guessing, just watching what you're talking about, that we can do a whole lot more than be bound by the squares and the triangles. Yes. Okay.
Matas Ubarevicius: Yeah. You may have noticed that projects in Bitbybit are quite a bit more complex. Basically, the moment you begin using the algorithms of OpenCascade, for example, that whole approach of thinking changes. It's less about solids and then operations on the solids; it's more about thinking, okay, maybe I have a curve in three dimensions, and then I want to create a surface between two curves — that is called a loft operation, for example.
Dene Grigar: So those two curves — I mean, the way Plato could get to curves is through many, many, many…
Matas Ubarevicius: Yeah, definitely, it's possible, right? But indeed it's a bit hard to define mathematically as well. So it's interesting how that plays out. Yeah.
Dene Grigar: Can I ask you one more thing? There was a colleague of mine when I did a postdoc at the University of Plymouth in England — one of my colleagues in the postdoc program, who lives in London, by the way, and is an architect — he was a 4D architect. He was designing four-dimensional spaces, which I thought was really quite wonderful. I've got a couple of his architectural drawings as works of art in my house. But I'm just wondering if that can also be something that can be envisioned in this space. Because what I tried to do all semester was get my students to quit thinking in normal spatial terms, right? To get out of that. But I felt like I was beating my head against the wall.
Matas Ubarevicius: Yeah, well, WebXR is interesting because — I mean, it depends what that fourth dimension is. That's why I mentioned I'm not a mathematician, but I know in mathematics a dimension is just — you can have as many as you want: time, various kinds of things indeed. And I think with WebXR this becomes maybe even more apparent. Text, for example — maybe it's not a dimension per se, but that layer of information is needed. As a user you do expect some sort of feedback — as a human being, first of all; as a machine maybe you don't, but as a human being you do need some sort of textual feedback. Even very simple stuff: if an error happened, do I see it? Do I understand it? Can I respond to it? What should I do with it? So this is very relevant. I saw that quote — I have it here even somewhere — on your web page: we are going to make machines intelligent, but what are you going to do for people? This really triggered me. In fact, with WebXR it's even more relevant: there's a lot of hype about everything and AI, but is it still understandable — how to make it readable, readable also for a human being? But back to four dimensions — yeah, definitely. Interestingly, the game engines allow, first of all, a lot more than CAD programs. In Rhino, for example, or whatever you use from Autodesk, you have buttons, you have 3D models, but usually you don't have this possibility to interact with 3D objects. You cannot really click on text or something. Fabien, you have a question? Sure.
Fabien Benetou: A quick remark. Indeed — the slider again that you showed in 2D on the page, we can think of doing it in 3D: we move the thing around. But it can be time, also. I mean, there are constraints — you need a big machine; if you want to make a complex shape in real time, that's probably not even possible. But it can also be motion. There's a famous game — I forgot the name — where time was bound by your motion: if you were static, time wouldn't flow, and if you were to move fast, time would flow fast. So we can also imagine this kind of thing here: some of the parameters for the design of the shapes are based on your motion, or time — basically, again, because it's parametric, all those things can become dimensions that change the shapes.
Matas Ubarevicius: Yeah. And interactive — you can touch those objects, right? You can do many things; they can react to each other. So this is just a small thing to think about with Bitbybit too. I know it's not very easy to think about it as a system — it has many component parts and everything — but basically these technologies are being packaged, and then you can create 3D models, 3D websites or web shops, whatever you create on the web, and probably even more; it depends how you use it. And with that, I wanted to go into this text in WebXR. I already mentioned that this triggered me, but it's more about personal experience that I had using WebXR, and I have a few projects that also use Bitbybit. Here, for example, is a small house project that I was working on. You see that I'm interacting with the house — it sort of changes its shape for the time being to stay responsive, then it takes time to recompute. But you see that there's still this UI floating in there. I haven't really had time to experiment too much with it, but even with that natural interaction — you can click on a point, you can move it around, you can play with it — at some point, if you want precision, you do need some inputs to define your model more precisely; maybe you will enter a number or something.
Matas Ubarevicius: I have another example here — I'm not sure, on Twitter — this is a more recent one, on the Meta Quest Pro. I just opened the editor, and we have one component that allows me to enter the WebXR space and see the result in WebXR. So here I'm coding in the browser of the Meta Quest, and I'm changing this geometry in WebXR — I'm seeing the thickness change. It's a really interesting experiment, although I must say the device itself is maybe not that powerful yet — maybe I have to work on performance a bit more as well. But there are so many possibilities for designers, where you can still experience and change those designs in one single space. Here, for example, this graphical color picker is in WebXR, but the rest of the script is in the browser. This was really fun to code — more of an experiment. And then the last one I have here: 3D bars. This also just allows you to change its parameters and adapt it a little, and see the change. Sometimes the results are crazy, but it's all algorithmic. You can change how many subdivisions it has. And in this case, you see that I'm also allowed to change the shape by doing these gumball transformations — but I activated this mode from the graphical user interface.
Matas Ubarevicius: So that's where the text — and what you can do with your model — is still needed; without it, it's really hard to have more complex, more intuitive applications. And with that, I think I went through most of it. Oh yeah, one last thing, because I mentioned that I recently started this new company. I want to show a bigger, maybe more practical piece that we developed as a demo. Here we have a small house — Bitbybit is controlling the shape of the house. I'm able to say, you know, maybe the roof should be higher, maybe the house should be wider, and you see that it recomputes. And based on the size of the house, the interior also changes: you see that now the table has only four chairs, but if I, let's say, make the house bigger — wider — you see that the interior space adapts, and everything changes. And please don't pay any attention to the price — it's really just for demo purposes; we don't sell those. We will mostly be building configurators like this. These kinds of systems, where you gamify architecture or design, really allow you to create non-standard, non-static things, which gives a lot of opportunities. And I will end with that. Yeah — if you have any questions, let me know.
Fabien Benetou: Just a question then, before opening up. First of all, thanks — and I saw a lot of claps in the chat. I'll extend the demo just a bit and try to share the screen. It'll be quick, so don't worry, there's time for all the questions and discussions. Let me know if you can see — can you see my screen?
Frode Hegland: Yes, yes.
Fabien Benetou: Okay, so I made a little heart a long time ago, which looks a bit like ice cream. It's to show that, yeah, when you're not used to it — it's not Blender, it's not that kind of thing — it can be difficult. So I used OpenSCAD a bit; that's for 3D printing at home. I've done a couple of other versions, but it wasn't easy. So that one is partly to temper the enthusiasm a bit. But what I showed also in the Slack — and that's mostly thanks, of course, to Matas's work, even during the last few days — so that's the usual demo you've seen in XR a couple of times, where there are icons of different 3D objects; that was the demo of the symposium. But now there is a kind of shelf. And what's exciting is exactly what Matas was saying before: the shelf itself basically has a bunch of parameters — you could add as many lines as you want, you could change the size of it. So I'll try, of course — live demo, so it's not going to work, but just in case.
Matas Ubarevicius: Fingers crossed.
Fabien Benetou: Yeah, so it did work. The whole point is that here I symbolize the room outside through this, which is also parametric: if the room size changes, the size of the shelf changes. And it's not just changing the scale — it's the number of curves and all this. So basically, thanks to Matas's work — it's not beautiful, as all of you know, that's not really my thing — but it works; it is arguably functional already. And that's thanks to the runner, thanks to all the different work being done there. It doesn't make me a designer or an architect, far from it. I'm very excited, though, because I've been trying to do this — like I was showing with the OpenSCAD, and previous attempts with Manifold — for a while now, for years, and now, thanks to the platform, it's basically coming together. So I'm super excited. I want to thank you again, because it's there and it's beautiful.
Matas Ubarevicius: It’s beautiful. Also.
Fabien Benetou: We were also discussing this — Brandel was giving some examples just two weeks ago, I want to say — and I was like, yes, that's exactly the kind of thing I'm into. But to me it has to remain parametric, so that the different dimensions and the usage do adapt to it. So yeah, now it's already partly there — it works in WebXR. I mean, I'm really hyped. So I want to thank you again.
Matas Ubarevicius: No problem. I’m really happy that it worked out as well. Yeah, we chatted quite a bit last week as we always.
Fabien Benetou: …do. But you kept on giving great advice, so I couldn't stop.
Matas Ubarevicius: Definitely.
Frode Hegland: Yeah. Fantastic.
Brandel Zachernuk: That's really nice work. I am curious about a couple of things. First — what I have in my note here is that what I want is diegetic hyperparameters. The first part of that is the diegesis: this idea that you have the ability to portray things in the world that represent the parameters that you have. You had the color picker in there; you had the dots that allowed you to run a lathe. Was that in Bitbybit, or was that…?
Matas Ubarevicius: No, that was already Babylon.js. Babylon.js and three.js have the gumballs. Basically, I subscribe to these events, indeed, and then change the geometry based on that.
Brandel Zachernuk: That makes sense. And, you know, one of the problems with geometry resolution, as you were saying, is that it's super slow if you want high-fidelity Boolean operations or whatever else — if you want to do an inset skeleton to manage a chamfer surface or whatever. So do you have any mechanism — something that I do sometimes is I'll draw intermediate feedback that's representative of the guide rails. You can update one or more splines at closer to real time for the purposes of going, "here's where we'll get to," and then leave it to another process to resolve. Do you have any capacity for that kind of two-step process — a control-speed level of feedback that then resolves into the other things?
Matas Ubarevicius: Yes. So it's a twofold answer, I guess, to the same problem. What you saw with the house, for example: the house fell back to a sort of linear model, basically just wireframe, and that was a separate algorithm for real-time speed. Then, when you're finished dragging — on drag end — you trigger the final result, recomputing the final model with all the intricacies. And you can even have a third button, of course, to really go and apply textures and super-realism. But in my experience with WebXR, you have really limited compute power. These algorithms are quite expensive; they can take a bit longer to compute — the more complex the model, the longer it takes. It's not crashing, right, but it does take its time. OpenCascade, for example, has a built-in capability to mesh the B-rep representation — that kernel works with boundary representation — and then you can choose the meshing precision: how many triangles you want to create out of the description. It solves it correctly into a closed mesh, but if you use fewer triangles, it will just perform better. And what I noticed is: yes, the computations in the kernels take longer, but mostly the performance issues begin to surface because the mesh itself is just too expensive for graphics — for the GPU, basically.
Matas Ubarevicius: That's where the heat comes from and everything. I really want to try PC VR, I must say; I'd really like to see how it compares. On the browser, on my Apple (an M1 or something), it performs well enough that I can do a lot of things. I know I don't have that much RAM available, but even though the algorithms may take a bit longer to compute, it's still fine. Also, when I use OpenCascade I try to avoid Boolean operations as much as I can. I always try to define my final shape just from faces: I'm trying to switch my head from using Boolean operations to asking how I can define the boundary of the object, even if it's quite complicated. Sometimes there are smart ways to go about it, and then the algorithms can be really, really fast. So there are tricks you can use to get that performance out of the system.
Brandel Zachernuk: Yeah. It's interesting that you think the majority of the issue is actually pushing the geometry through the GPU at render time. Another opportunity, somewhere between PC VR and what you've got today, is to offload some of your compute to a connected machine via WebSockets, so you can just throw that workload over; anything computationally more expensive, like some skeletal decompositions, might fall into that bucket. It could be an architecture that lets you do a little more. In practice, I don't know how much you've done with real-time 3D outside of this generally CAD-ish stuff, but real-time graphics leans on things like normal maps and low-poly mesh decomposition so that people get the appearance of high-poly geometry: you would not put polygon chamfers on something unless you really need them for the actual silhouette, and even then people will use LODs and such. So it may be worthwhile to look at where it might be possible to move detail into normal maps and the like. That's a whole different set of responsibilities to take on, but sometimes it's worth it. It's super interesting work. Do you know Trimble Creator? A friend of mine...
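Brandel's offloading suggestion could look something like the sketch below: heavy geometry jobs are serialized to a desktop machine over a socket and awaited as promises. The job/result message shapes here are assumptions, not an existing protocol, and the transport is abstracted so the same class works with a real WebSocket (wire `onMessage` to `ws.onmessage`) or a stub.

```typescript
// Hedged sketch: ship expensive compute jobs to a connected machine
// and resolve a Promise when the reply comes back.

type Transport = { send(msg: string): void };

class ComputeOffloader {
  private pending = new Map<number, (result: unknown) => void>();
  private nextId = 1;

  constructor(private transport: Transport) {}

  // Fire a job at the remote machine; resolves when its reply arrives.
  run(op: string, params: unknown): Promise<unknown> {
    const id = this.nextId++;
    this.transport.send(JSON.stringify({ id, op, params }));
    return new Promise((resolve) => this.pending.set(id, resolve));
  }

  // In a real deployment, call this from the WebSocket's message handler.
  onMessage(raw: string): void {
    const { id, result } = JSON.parse(raw);
    this.pending.get(id)?.(result);
    this.pending.delete(id);
  }
}
```

The headset-side code then stays simple: `await offloader.run("booleanUnion", shapes)` for anything too slow to run locally.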
Matas Ubarevicius: I know Trimble, but no, I'm not in contact with Creator.
Brandel Zachernuk: I helped a friend build it. Well, "helped": he did most of it, I just pointed him in the right direction for some of it. The system used to be called Mattermachine, then Material, and then it was acquired by Trimble, and now it's the computational geometry core in SketchUp.
Matas Ubarevicius: I know, I know. Yes, I think I have him on LinkedIn, from Material; he sent me the links. I remember. Cool.
Brandel Zachernuk: Tom is real fun. It's super interesting to see, because it was precisely that sort of configurator generation system that I think is really important. That responsiveness, as you intimated with your architecture example, is absolutely essential for people to build these kinds of expressive objects, of the kind Dene was mentioning too: things responsive enough to reflect specific textual components and carry those pieces of information. Have you ever heard of, I'll stop talking soon, I promise, have you ever heard of Chernoff faces?
Matas Ubarevicius: Chernoff? No, I don't think so.
Brandel Zachernuk: It was this insight that if you personify multivariate data sets, for example by saying the proportional size of the eyes and the shape of the mouth convey data, then we get a significantly higher level of legibility and capacity to process them. I think it was Kirsch who said that if you take three-dimensional abstract blocks, the kind in IQ tests for determining chirality ("is this a stereoisomer of that?"), and just put a head on them, it becomes almost trivial for the human mind to process. So there are some really interesting ways of using data transposition, particularly when it ends up somewhat anthropomorphized, that are really valuable for this. It's out of scope, obviously, to turn a general archviz tool into a humanoid configurator, but it's really interesting to recognize that some dimensions are easier to read than others, and if you can jam complicated data into simple channels, you can give people a lot more literacy in what they get out of it. But I'll yield the floor.
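The Chernoff-face idea Brandel describes is essentially a mapping from the columns of a data record onto facial features, so a whole record reads as one glyph. A toy version might look like this; the record fields, feature names, and normalization ranges are all invented for the example.

```typescript
// Toy Chernoff-face mapping: each data column drives one facial feature.

type Record3 = { revenue: number; risk: number; growth: number };
type Face = { eyeSize: number; mouthCurve: number; headWidth: number };

// Clamp a value into [0, 1] given its expected range.
function norm(v: number, lo: number, hi: number): number {
  return Math.min(1, Math.max(0, (v - lo) / (hi - lo)));
}

function toFace(r: Record3): Face {
  return {
    eyeSize: norm(r.revenue, 0, 100),            // bigger eyes = more revenue
    mouthCurve: norm(r.growth, -10, 10) * 2 - 1, // -1 frown .. +1 smile
    headWidth: norm(r.risk, 0, 1),               // wider head = riskier
  };
}
```

Rendering the resulting `Face` as SVG or a 3D head is then a separate, purely presentational step.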
Matas Ubarevicius: Thanks. I have it open on Wikipedia now, so I'll be looking into that.
Fabien Benetou: A quick, easy technical remark: indeed, if you don't want to buy a desktop computer for PC VR, CloudXR works quite well, assuming you have a good connection. You just start the browser, you have SteamVR, and that's it; whatever headset you have will do, and you normally don't have to rebuild anything. That being said, I think some of the questions Brandel was hinting at before (and I posted the link earlier to Bret Victor's Inventing on Principle) come down to: how do you get fast feedback? For us especially that's interesting, because arguably we don't really know what we're doing, since hopefully it's genuinely new. What you showed gives us a whole landscape to explore: a virtual landscape of both indoor and outdoor architecture. And Dene was highlighting that it doesn't have to be the usual Cartesian one; it can be hyperbolic, it can have other dimensions. There is so much to explore, which makes it even more important that the feedback is, I want to say, efficient, which probably implies fast, and that we make the right decisions about what is costly to compute and what we can have as widgets to explore extremely efficiently. Maybe also: how do we keep the history of changes? When you were showing some of the changes to the vase, I was thinking: maybe this one is nice, but maybe I want the slider to highlight that I have already explored this parameter and got that result. Once I have a series of designs available to me, how has that landscape been explored so far, and where are the unknowns still to explore? Once you have those new possibilities, this new landscape, how do you navigate through it efficiently? I think that's one of the most interesting questions.
Matas Ubarevicius: Maybe real quick before the next question. Some of this, of course, is left to the programmer of the application. If you want to log the history of changes, for example: every time some input changes, you can put that somewhere in memory and retrieve it later, or save the design. Indeed, what I noticed is that when many parameters configure the object, it's sometimes just impossible to remember what the configuration was, so you have to save it somehow to get back to the precise setup. And in terms of speed: points are fastest to represent, curves are fast, surfaces become a bit slower, and the more triangles there are, the slower it gets. But it's also a bit of trial and error what works best for a particular application.
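The parameter-history idea Fabien raises and Matas sketches ("put the inputs somewhere in memory and retrieve them later") could be as simple as a snapshot store like the one below, which a slider could query to mark already-visited values. The snapshot format is an assumption for illustration.

```typescript
// Minimal design-space exploration log: record each tried configuration,
// skip exact duplicates, and report which values were explored per parameter.

type Snapshot = { params: Record<string, number>; note?: string };

class DesignHistory {
  private entries: Snapshot[] = [];

  record(params: Record<string, number>, note?: string): void {
    // Skip exact duplicates so re-dragging over old values stays clean.
    const key = JSON.stringify(params);
    if (this.entries.some((e) => JSON.stringify(e.params) === key)) return;
    this.entries.push({ params: { ...params }, note });
  }

  // All values previously tried for one parameter, e.g. to tick a slider.
  explored(name: string): number[] {
    return this.entries
      .filter((e) => name in e.params)
      .map((e) => e.params[name]);
  }

  size(): number {
    return this.entries.length;
  }
}
```

Hooking `record` into the drag-end event (rather than every drag frame) keeps the log to deliberate choices only.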
Fabien Benetou: Sorry, it's funny how, in the end, computational complexity partly bounds our ability to create: how costly, and thus how slow, the generation is going to be keeps us out of some corners of that landscape and lets us go faster in others.
Frode Hegland: So take us in a slightly different direction.
Mark Anderson: In a slightly different direction, yes. It was really prompted by a reflection about wanting these things available in a space. I was watching when you were working, I think on the retail interface, and unsurprisingly you were looking around for the right input box. One of the things we already see in this group is the different perspectives between, say, a programmer, a graphic designer, and a literary person: they're all looking at text, but through very different lenses. The text is the point of collision. So, and I'll preface this by saying I don't think this is your job to do, the question that came to mind watching was: does something like this allow you to have more meaningfully descriptive controls? In other words, "use this slider if you want to make this box bigger," because depending on which angle you come into this interface from, that might actually be the only way you understand how to use it. You may not understand the full constructional part of it; you just know what you want to do.
Mark Anderson: You just want to know which lever to wiggle. And the tools are growing all the time, so my asking the question isn't to say this isn't here; it's more that we're probably getting to the point where we can consider how that aspect of annotation, of description, can get built in, because all these other things had to be done before we could even conceive of this meeting point. But I do think it's really interesting to think about those perspectives. There's the computational perspective. There's the visual-aesthetic perspective: the architect or graphic designer, with an intuitive sense of space and how things look. And there's the person for whom the meaning lies within the text itself, for whom the fact that it's rendered in this font or that is almost immaterial, but who might still want to do transformations on it, and therefore needs the handles to do so. Anyway, I'll leave that reflection there.
Matas Ubarevicius: No, you're completely right. As a developer, you sometimes forget that you have to explain something to a user. That's partly what I meant about WebXR: that text is essential. And I sometimes leave out, or forget, things that should be explained in context. What you described is basically a feature that is still missing: a comment, right? A comment I could add next to a component saying: okay, this is the variable that will control a certain thing. It's definitely on the list, by the way.
Mark Anderson: There's also an interesting payback in that. Once you have that sort of mapping, you can say: if you're playing with this aspect of the shape, all these things that light up are the things that feed into it. That's interesting too; it provides quite subtle learning that steps outside having to have a formal understanding of all the geometry or the programming platform. Just another thought. Thank you, I found this really interesting.
Matas Ubarevicius: No, thank you for the comments.
Frode Hegland: Yeah, I think you're still up.
Fabien Benetou: Quickly, on Mark's point. My little adventure down that path has been funny, because I'm a developer first, and I tried first to use the API and the code. Honestly, initially it was too hard, for a couple of reasons. First, parametric design is still new to me. I've been excited by it; I have books on generative design, quite a few different books on the topic, some related to architecture itself, but the one on generative design is beautiful, I do recommend it, it's gorgeous. But I hadn't really taken the time to practice it. So I loved the idea, but it hadn't clicked to the point of actually doing it. I thought, I'm a coder, I'm going to use the code, and I didn't manage, because the way to think about it hadn't clicked. So I used Blockly initially, because that's what's welcoming for beginners. It was still a bit tricky, so then I used the graph-based way, and for me, for discovery, that was much easier, much more efficient. But then I wanted to go back to code. And then I started to wonder how it all works: the boundary representation itself, and then OpenCascade, the kernel itself. So I went from all the way up to all the way down, and then finally back to code, because that's what I work with and what gives me the most freedom. The point is, I had to do a couple of back-and-forths between the different levels based on my understanding at the time. The tools, the ability to discover how they work, and the underlying concepts: it's not a linear adventure where you read the doc, implement, and boom, that's it. It's a back and forth with the different tools and the different representations.
Matas Ubarevicius: I still walk the same path, so don't worry. You will still walk it too, but at least a little more easily, I hope; that's what I'm trying to help with. At some moment I also start missing something, and then I start reading, and there's still a lot. OpenCascade, for example: I don't expose everything that is possible, I expose a part of it, but there is just so much in it. They have been developing it for 25 years already, I think, if I'm not mistaken, so there's a lot to unpack; I'm not sure I will ever unpack everything. And I just remembered: this B-rep representation, you can save it. You can put a low-res mesh on the screen, but you can save the boundary representation to a file format called STEP and open it in FreeCAD or other CAD software, such as Autodesk Fusion, and work on it manually there. I don't support STEP fully, but most things you can imagine would work out. Importing big assemblies from Fusion, for example: I know some engineers wanted that and sadly didn't succeed; I don't yet have full support for the really advanced things, though I will keep working on it. But basically, even if you lack resources in the browser, you can think of a way to get the model into a professional CAD system to finish it off. Or maybe you address part of your design parametrically, and the other part is just a glTF that gets loaded. In those ways you can begin building the app.
Frode Hegland: Thank you. Brandel, is it a quick one?
Brandel Zachernuk: It starts quick. You mentioned glTF; also look at USD and how to support it, if you haven't tried it in Vision Pro.
Matas Ubarevicius: I would imagine... look, for glTF and USD, I would imagine Babylon.js does that? Not really? No.
Brandel Zachernuk: No. So if you want to generate USD right now, the best thing to do is actually use the Wasm build of the USD library. There is a relatively unsophisticated exporter that Mr.doob has written for three.js, but there are better things, and it's worth looking into exactly what USD is. It's not just a file format, it's a workflow mechanism; Adobe and Autodesk are both signatory members, and we're going to start something called the Web Working Group soon. I will be the chair, I believe. It will be useful for folks to think of USD as more of an interchange system for working across tools, with things like nondestructive, collaborative workflows, which are probably worth your while to take a look at.
Matas Ubarevicius: Definitely. Thanks for pointing me in those directions.
Frode Hegland: So that actually helps me with my comments question. First of all, thank you. This is exactly what we need: a deep dive into something that's central to what we do but that we're not all experts on, to say the least, though we do have at least three serious experts in the room in addition to you. So here's the thing. I'm going to Brussels on Thursday, and the idea was just to have a chat about icons, about visual design, just for Fabien and me to talk about the superficial stuff: not project stuff, not serious stuff, not coding stuff. So my brain has been fully engaged with that; I'm basically an artist by birth and trade: Keynote things, sketching even on paper, having fun. And it turns out very quickly, and I'm sure you've all come to the same conclusion, that the icons are of course important, but it's how they work together that really matters. That's why it's so wonderful to see what you're showing today. Just as generative AI is now an opportunity for doing visualizations and all kinds of things, representing a very specific set of tools, this represents, at least for me, a new and very specific way of looking at things. So first there's the question I want to throw out that we started today with; but also, Matas, it matters as we try to understand this world of generative and parameterized (sorry, it's a tough word) design. I hope we won't just see it today; I hope you will be back. You've opened some skulls today, most certainly mine; I don't know how much came into my head other than seriously large amounts of inspiration. So if you want a community where you can come in and have some back-and-forth dialogue, I think this is definitely one of those communities. That was a big thank-you.
Matas Ubarevicius: Thank you for greeting and inviting me as well. I can definitely imagine we will stay in touch, with Fabien maybe in the more immediate term, but if there's something you think would be interesting to you all, I'm definitely interested in joining. And I must say, in this world where so much is happening, sometimes important things get overlooked a little. I don't remember exactly when I first read your website, but I opened it and went through the text concentrated, like I was reading a book, and it inspired me: there are people who still remember that we are human, and important things are still important. That didn't change because of all the great stuff, including generative AI. I did an experiment: I threw my complete API definition at Gemini, I think it was, and asked, okay, code a sword with this API. It did understand that it had to create a sort of cross shape, but it was made out of boxes, so it wasn't very great at the moment. Still, it did do something; who knows how it will play out. It's interesting, of course, but this sort of designer precision is still relevant, at the moment at least.
Frode Hegland: Yes. And then there's a question; let's see how long there's interest in it. Especially given what you've shown us today, I think it's worth spending a few minutes trying to figure out what we call a collection of knowledge in this new XR world. We've been toying, as you heard at the beginning, primarily with volume or map. I like volume; I think Rob introduced the notion, because a book is a volume, but it's also a space, so that's beautiful. The only problem I have with it is going between different media: if you're on a normal 2D screen working with your knowledge things, calling that a volume doesn't fit. That's why I'm wondering about map: you have a 2D thing, you go into your 3D world, and back and forth. Yes, Fabien, I see your hand. But Matas, with all your dimensional, geometrical thinking, what kind of language do you think fits for moving knowledge connections around?
Matas Ubarevicius: Me not being a native speaker, I will definitely try to avoid the question somehow, in a smart way. Volume is an interesting one; I know that drives on computers are called volumes as well, and it's also a bit dimensional, which is interesting. But I'm not going to vote on that one; I'll keep it to myself.
Frode Hegland: So political. All right, Fabian, let’s see.
Fabien Benetou: So, volume, of course. But why? Because we don't need to worry about keeping the older version, let's say the 2D screen; it's already there, and people are already familiar with it. So my worry isn't that people would wonder how to use this alongside the old, flattened way of the 2D screen. What worries me, and it's what I was trying to express earlier, is that we use the new medium the old way, or rather that we forget how to genuinely use the new medium, because the old one we already know how to use. That doesn't mean we should keep using the old medium only in the old ways and not try novel ones within it. But if we keep on staying flat even in 3D, in XR, it's not worth it. So that's why the push, the insistence, on not bringing so much weight and baggage from the old medium, without actually leaving it behind, that we forget why we put this effort into trying something new.
Frode Hegland: I agree with every single word you said, and I disagree with the conclusion, which makes for great fun in discussion. Naming is of course really, really important. My concern with volume is, as I said in my little intro, that what we're working on here is moving knowledge around. The most immediate case is moving it in front of us, so we have something like a spatial hypertext, a knowledge graph, a memory palace, whatever the need, whatever the size. A lot of it will be 2D because that's simple; hopefully more and more will be 3D because that's important and richer, no question. But we will also really need to access this from flat displays, even ten, 20, 30, 40 years in the future; there will sometimes be flat displays. What I'm saying is that "the map is not the territory" is no longer true when it's digital. In a computer game we say, what map shall we play next? The word does actually mean that now. I'm not saying it's the correct usage, but that's what people say: are there any new maps for Call of Duty, or whatever? They're called maps, not levels, not places. So what would be a natural language for someone working on a 2D display of knowledge on a laptop that's connected to XR, going back and forth? I'm not saying map is necessarily the winner, but volume isn't necessarily so easy either. What does everyone think? Dene, please.
Dene Grigar: Well, I think about what Bob Horn's doing with his maps. He showed me his collection, which is pretty extensive; he's been doing this visualization with maps for, what, 40-plus years? That's not what we're doing. What I really liked about Matas's final presentation was the house with the person moving around in it. When I think of VR and XR environments, I think of movement as much as I do depth. I'm not thinking of width and height; I'm thinking of depth and movement, and also touch. We feel like we're touching things even though we're not. When I'm playing Beat Saber, for example, I feel like I'm holding the sabers, not controllers, and I feel like I'm hitting the boxes because of the responsiveness we get: the interactivity, the sound, all help give the sense of a haptic experience. It's those things, and there's really not a term for this; maps does not make sense to me either. I think the problem, as with everything we try to define or describe, is that there's nothing in the current world to make sense of it.
Dene Grigar: We give these terms as provisional terms with the idea that in five years, ten years, a hundred years, somebody's going to come back and say something better. People in the Middle Ages didn't call themselves people living in the Middle Ages; it was people in the Renaissance who used the term medieval. So whatever we come up with is not going to be what we end up with. But I think we can probably come up with something that reflects depth and these other kinds of sensory experience, because even square boxes have the potential for interactivity, and we want to touch them; anything that's 3D, we want to manipulate. Just one more thing: any time I show the beach ball in my museum to anybody, the first thing they want to do is throw it. In fact, with almost anything in a virtual environment, people want to throw it. There's something human about wanting to throw things in VR. Why, I don't know; I'm sure there's a study on it. But anyway, just saying. Thanks.
Frode Hegland: That's really beautiful, Dene. Just a comment before Mark: maybe we should spend a little time on this, because it's basically marketing, right? We're marketing ideas. And I do agree very much that we don't want stock ideas. The word map, its etymology, is fascinating; it's changed many times, but it may very well have run its course. Volume is very exciting; I really like that one. But when Dene was highlighting this, I was thinking not so much of time as of interactivity. We've got to find some sort of term we can throw around with our friends. One suggestion was Knowledge Sculpture; it's a bit long-winded, but we've got to think of something. Yeah, Mark, sorry.
Mark Anderson: Very quickly on the latter: perhaps the thing is not to rush to define things too early, and to let the terms come out of use, because sometimes I think we unwittingly rush to define things when we don't need to; we know what they are, and then we get locked in. The reason I put my hand up was Dene's point about Bob Horn's maps. It's really interesting doing some deconstruction of these. One thing it reminded me of is the narrow gap he'd been pushed through: Bob's been thinking about hypertextual design for a long time, and yet the maps he produced were effectively made in Illustrator and are completely flat artifacts. I deconstructed some, and I had to use text recognition to pull the text out of the image to build it back into something. That was a useful reminder, because the framework we've been looking at today is another good provocation: we're now giving ourselves a constructional process that allows us to hook back. So we should think to construct these visualizations so that individual pieces of text or individual blocks, even fixed images, attach to a particular idea and are addressable: something that can be painted onto the front of an object, or be the descriptive text nested within it that you might discover if you wanted to read about it. Just that thought. Thanks.
Frode Hegland: Really briefly before you, Brandel, I'm so sorry. Mark, it's kind of important, because my software Author is commercial and separate from this; however, it serves as an exemplar of how to connect, which is what we're trying to do with all these open JSON things, to go between systems. I have a tab called Map. So I need to be able to label it so that when someone puts on their headset and connects with what's in this map, which is basically a knowledge graph, the naming is consistent. I may very well need to say: this is a map, and when you put on your headset it becomes such-and-such. I'm not saying it has to be the same word, in the same way that a defined concept in my software becomes a glossary on export; nothing wrong with that. But because we're starting now on year two, it's worth having a better chat about it, with your input, because you're literally making it 3D in the fantastic sense. That's all I'm saying, Mark. Brandel, please.
Brandel Zachernuk: I am immediately hypocritical on this point of naming, because on the one hand I think we don't need to decide right now, but on the other hand we urgently actually do. I mentioned I'm proposing this HTML model element, and model is an awful name, but we already use it on the web: the Document Object Model, the DOM, is already there. So it's joining that canon of terrifically overused words like set and map, each of which also has dozens of equally important, equally separate definitions. There will be some interesting collisions from what ends up being laid down, because model, or whatever else somebody gets to call it, will be what a 3D model is referred to on the internet hereinafter. And it will be interesting to see whether we find other intrinsically spatial terminology to characterize the things that are not strictly models but are also 3D: scenes or volumes or spaces or whatever else.
Brandel Zachernuk: And likewise for all the subordinate ideas. Initially people wanted model to get and set the camera, and I put the kibosh on that, so it's now going to be an entity transform. But entity, model, and transform are all terrible names: they mean everything, so they mean almost nothing. And environment map is another part of the API. If you're deep enough into the details these can seem descriptive, but at the top level they almost all mean almost exactly the same thing. So it's an interesting one to chew on: what must we do to get the work done, versus what troubles are we setting ourselves up for in the decades and centuries to come, when hopefully some of this technology is still in use?
Frode Hegland: That was perfect. It makes me put in the chat here "view spec", which sounds a bit ridiculous, but it is a specification for what an environment should be. Now, maybe to help us with this discussion: I now have three things in the map, and I'm so excited to tell you guys. I can't believe I managed to wait till the last six minutes of our call today. One is a defined concept, which is a bit of text. You do control-click, Define, then you write a bit about what it is. That's it. And that can then be connected to other things. Don't worry, I'm not going to go into all the details; it's just a point. So we have defined concepts. You also have just notes: just text, it does nothing on this thing, and if you export your glossary it's not included. It's just a bit of text, which turns out to be very useful; I showed it to a few of you recently. Then, finally, and this is all Fabien's fault, I have something called a link. I looked into all kinds of things, connections, all kinds of fancy words, but Fabien and his environment use links, and that makes sense. Three types of links: a link to a heading in the document; a link to a local document, which can be the same kind of map, so you can now nest maps; and then of course a web link. The reason I'm bringing these three items up in the conversation is that of course there can be other things too: images, 3D items, all of that stuff. We are talking about a knowledge space, right? I'm working on a flattened knowledge base; together we're working on richly dimensional knowledge spaces. I'm wondering if those specific things will maybe spur something in someone's brain. And no, we're not going to call it a second brain. This is the kind of thing we discuss here on Mondays normally, so it's perfect.
Matas Ubarevicius: Naming is the hardest thing in software engineering, and beyond, I can imagine. And the thing is that you then have to live with those choices. So I get why sometimes you have to spend time thinking about that.
Frode Hegland: Yeah, and I agree with you so much. And that's why I was so grateful to talk to my mother today. You know, she's 89 and she doesn't use a computer. She doesn't text; she uses a phone. Clever woman, lives in a different world. But to just talk with mom, or grandma as we call her now, about the future, doing stuff on a laptop, putting on the headset. Future meaning, you know, a few months from now. Then you have all these bits related to your work on the wall, or in the room of knowledge, that you can move around to better understand things. Some of them are simple texts, some of them are very clever, and then you can give it to someone else. Giving it to someone else is obviously crucial for an environment. So when you talk to somebody outside of our community, you know, what do you call it? I call the little things knowledge objects. But what are the collections of them? Right. Mark.
Mark Anderson: Just very quickly, thank you on your point about links. I think it boils down to this: web links are obviously web links, because they're just going somewhere else; they're playing in the web world from within one's own tool. Links originally, because it was all text going back then, essentially had a granularity. You'd have a link between two objects, two nodes for want of a better word. You could call them what you will: notes, objects, whatever. And then you could, in a sense, have an anchor within that. So back in the days of text, that meant this word was the actual anchor, as opposed to this whole piece of text being the link. And I think that model still applies. (Oh, right, is that Brandel? I saw somebody else.) So I think that is the sort of granularity; you don't need to look too much beyond it. An individual tool may have its own interaction in the way that you make or perceive links, but they're not different types of links. In a sense, they're all links, and really the bit you play with is the degree of granularity within your object model, as to where the link is actually stored or where its address lies. And then obviously within one's individual tool one may visualize it in a particular way, but they're all links under the hood.
Frode Hegland: Yeah, I think that's quite important, Mark. And we are running out of time. So I don't mind having different names, as long as they're for different uses of the same thing. I don't mind that at all. There's nothing wrong with it being a map in my little world, and then you put on your headset and suddenly it's a volume. Nothing wrong with that, as long as the user knows. I just think every once in a while we need to have that little discussion. And as I said earlier, I'll say it again, actually in a different way: I started using Photoshop in the 90s, and it was a new kind of thing, and I was young enough then that my brain understood this new kind of thing. What you're doing now is, in the most lovely way, obvious, but the fact that you're actually doing it is not obvious. So I'm so grateful that you came here and you're doing a real thing. We can find so many names for it: computational space, relative space, all of these wonderful things. I'm very, very grateful. I hope we can continue the discussion, and I hope we see you again, whenever that is. By the way, feel free to drop in any Monday, same time, or you know, if you have five minutes, an hour, two hours, it really doesn't matter. People come and go as they can, and then maybe in the future some of us will have more intelligent questions, not only Brandel and Fabien, who are deep into this. So that's it, really.
Matas Ubarevicius: Sure. No, thank you, thanks a lot for inviting me, and I enjoyed the discussion. You guys are doing interesting things, definitely. And I'm glad to have addressed at least maybe some of what you're doing. I'm really looking forward to seeing some results from Fabien as well, and from the AI side. You have some really nice developers here, and ideas, and those are two things that usually produce really nice and interesting things. So I just wish you creativity, and I'll definitely keep in mind that I can drop by and be welcomed by this community. Super nice.
Frode Hegland: Yeah, you're so welcome. And in the last few seconds: when I see your system, not visualize it but literally see it, you know, with the nodes and the graph (of course there are other things that do this, with dimension and so on), to see it the way you've done it, with the way you can provide APIs for the inputs and outputs and so on. In this knowledge map, volume, wall, thing, one of the dreams we have in the community is to have these little knowledge objects be anything from plain text to an entire LLM, so you can interrogate the room of knowledge you have, and it can say: hang on, I am an LLM representing this and that, and what you have over there is rubbish. That's something we have discussed. But to have what you're doing, to have an object in the room, or a collection of objects, that can understand the room and then change a visualization based on that? Holy moly. I don't know what that means at all, but I do think it'll be wonderful to think along these lines. And again, thank you. So on that note, everyone, see you all when we can. This will go up on the YouTubes, and then there will be the transcripts, and we're experimenting with new prompts for the summaries and keyword extraction and all that good stuff. So you will obviously get access to all of this; it'll be in our journal. All right, bye for now.
Fabien Benetou: Thanks again. Take care. Bye bye.
Chat Log:
16:06:58 From Fabien Benetou : for ref the LED on skates and elsewhere I’m using https://www.crowdsupply.com/makerqueenau/glowstitch-leds
16:08:31 From Fabien Benetou : https://dtc-wsuv.org/electronic-literature-lab/
16:08:31 From Karl Arthur Smink : Reacted to “https://dtc-wsuv.org…” with 😆
16:08:37 From Karl Arthur Smink : Removed a 😆 reaction from “https://dtc-wsuv.org…”
16:08:39 From Karl Arthur Smink : Reacted to “XRfD.jpg” with 😆
16:09:17 From Mark Anderson : Replying to “https://dtc-wsuv.org…”
Well worth a visit if you get the chance!
16:14:02 From frodehegland To Ge Li(privately) : Glad you made it!
16:14:07 From frodehegland To Ge Li(privately) : Intros now
16:14:09 From Dene Grigar : Hi Jesse!
16:14:23 From Mark Anderson : Reacted to “Hi Jesse!” with 👋
16:14:52 From Ge Li To frodehegland(privately) : Hiii dene frode!
16:17:45 From Fabien Benetou : it’s warm in the Zoom room 😛
16:17:55 From Ge Li : Reacted to “it’s warm in the Zoo…” with 😆
16:18:07 From Fabien Benetou : (terrible weather in Belgium too)
16:22:02 From Fabien Benetou : robot Mark
16:22:11 From Fabien Benetou : (voice is a bit different than usual)
16:22:19 From Fabien Benetou : (but understandable)
16:29:01 From Karl Arthur Smink : On the subject of “what do we call the knowledge objects”:
“Map” has lots of 2D baggage, and some social definitions as well that are undesirable to be associated with.
“Volume” isn’t descriptive enough, I think. To me it says 3D, but it also says “empty space”.
What do you think about “Capsule”? Capsules are 3D, and also have connotations as containers. Something useful is inside a capsule, but they are self-contained and can be exchanged.
16:29:31 From frodehegland : Replying to “On the subject of “w…”
Hmm….
16:29:53 From frodehegland : HI Leon!
16:30:31 From Rob Swigart : As an author of books I take issue with the designation of volume as empty space. My library is filled with volumes.
16:30:46 From Mark Anderson : Replying to “On the subject of “w…”
Yes, personal classification works for personal projects but needs discussion before presuming wide adoption as personal classification can clash with others’ personal classification of the same term but differently. Food for thought.
16:30:46 From Fabien Benetou : I can help extend demo time if need be, I have sth related 😉
16:30:56 From Fabien Benetou : Reacted to “As an author of bo…” with 👌
16:30:59 From frodehegland : Reacted to “I can help expend de…” with 👍
16:31:09 From Mark Anderson : Reacted to “As an author of book…” with 😂
16:31:17 From Karl Arthur Smink : Replying to “As an author of book…”
Ah. Another trouble, it seems, in using words with multiple definitions.
16:31:42 From Fabien Benetou : for ref https://en.wikipedia.org/wiki/Rhinoceros_3D
16:32:20 From Ge Li : Reacted to “I can help expend de…” with 🤩
16:34:26 From Brandel Zachernuk : Super curious about the guts of this! If it’s backed by CGAL or a headless Houdini etc
16:35:53 From Fabien Benetou : https://en.wikipedia.org/wiki/Grasshopper_3D
16:36:09 From Fabien Benetou : Replying to “Super curious abou…”
there are a few kernels, e.g. Manifold and OpenCascade
16:36:48 From Dene Grigar : this is the kind of object I want to see in our project
16:36:56 From frodehegland : Reacted to “this is the kind of …” with 🔥
16:37:05 From frodehegland : Replying to “this is the kind of …”
Me too!
16:37:05 From Dene Grigar : I’d love to see “text” reflected in this way
16:37:27 From frodehegland : Reacted to “I’ve love to see “te…” with ❤️
16:37:30 From Dene Grigar : this is what I mean by 3D, not simply the space but the objects as well
16:38:05 From Fabien Benetou : Replying to “I’ve love to see …”
you’ll see soon 😉
16:38:49 From Fabien Benetou : it’s already much improved 😀
16:39:41 From Fabien Benetou : Brandel: addressing directly the remark on the Monday meeting after the symposium
16:40:11 From Fabien Benetou : (but without a loop back and forth from/to Blender)
16:43:47 From Fabien Benetou : much better to discover and tinker with than with code to begin with, children and otherwise
16:45:17 From Fabien Benetou : BRep
16:45:42 From Fabien Benetou : https://github.com/bitbybit-dev/bitbybit
16:46:33 From Fabien Benetou : and also no build
16:46:57 From Fabien Benetou : cf https://bitbybit.dev/blog/updated-bitbybit-runners
16:47:17 From Fabien Benetou : Replying to “BRep”
16:49:46 From Fabien Benetou : or bounding space size itself
16:50:30 From Fabien Benetou : Replying to “this is what I mea…”
me too, tried for a while now, e.g. https://x.com/utopiah/status/1765269414021075226 (and I have older ones) but it hasn’t been easy, until now! Hence the excitement I wanted to share 😀
16:51:46 From Fabien Benetou : Replying to “this is what I mea…”
and some work based on data https://x.com/utopiah/status/1223996127747485697 but still “just” working with primitives then, not parametric design, unlike here
16:55:19 From Fabien Benetou : on kernels :
but IMHO onboarding is much easier with the BitByBit RETE editor (graph) or the Blockly version, for discovery of the primitives and transforms at least.
16:56:54 From Mark Anderson : Reacted to “on kernels :
https…” with 👍
17:04:59 From frodehegland : Super hot
17:05:02 From Brandel Zachernuk : Superhot and braid
17:05:06 From frodehegland : right
17:05:22 From Fabien Benetou : yes ,thanks
17:06:54 From Fabien Benetou : you could pinch on the side to pull on that axis
17:09:34 From Fabien Benetou : I can do a very quick demo before opening to discussion
17:15:01 From Mark Anderson : Reacted to “Screenshot_20241209_174209.png” with 👍
17:15:32 From Fabien Benetou : (got this working literally minutes before the presentation)
17:15:39 From frodehegland : Reacted to “(got this working li…” with 🔥
17:17:00 From Fabien Benetou : wanted to highlight that it’s not easy, just MUCH easier, to the point that not only Matas can do it; again that’s what’s exciting for me, potentially opening a wide landscape for us to explore
17:18:43 From Mark Anderson : Reacted to “wanted to hilight th…” with 👏
17:18:51 From Fabien Benetou : also I imagine a lot of you are familiar with https://www.zaha-hadid.com for a bit more inspiration, not “just” cubes
17:19:29 From Fabien Benetou : Matas: CloudXR works, might be worth trying this way without buying
17:20:06 From frodehegland : Reacted to “also I imagine a lot…” with ❤️
17:20:46 From Brandel Zachernuk : A friend builds http://creator.trimble.com/ (for SketchUp and the rest of the BIM stuff they make over there)
17:20:56 From Fabien Benetou : good excuse to plug, on nearly a monthly basis, https://archive.org/details/vimeo-36579366 Bret Victor’s Inventing on Principle
17:21:17 From frodehegland : https://www.zaha-hadid.com/architecture/beijing-new-airport-terminal-building/ on Zaha. Saw this earlier in the year, very non plain lines
17:22:21 From Mark Anderson : https://en.wikipedia.org/wiki/Chernoff_face
17:24:08 From Brandel Zachernuk : https://www.youtube.com/c/MatterMachine before acquisition
17:24:27 From Brandel Zachernuk : (My friend’s computational geometry tool for configurator generation)
17:24:37 From Matas Ubarevicius : Reacted to “(My friend’s computa…” with ❤️
17:27:52 From Fabien Benetou : on https://en.wikipedia.org/wiki/Computational_complexity and the intersection of tooling and creativity
17:29:59 From Fabien Benetou To frodehegland(privately) : I have sth on Mark’s point
17:30:08 From Karl Arthur Smink : Abstraction in tooling is a really sticky subject, yeah. The computer graphics class I was a TA for this semester is having an identity crisis right now, where the instructors want to teach the low-level fundamentals, but students have no practical use for knowledge of how to program basic ray tracers
17:32:07 From Karl Arthur Smink : BitByBit is a cool tool. Reminds me a lot of Unreal Engine’s Blueprint visual scripting language, but also Houdini’s parametric modelling tool. The cool part is that it seems inherently built and optimized for the web; the other two can build for the web, but it’s not their primary focus
17:33:00 From Leon van Kammen : really cool presentation (and great to see everybody again).
I have to jump now, so thanks and see you next time!
17:33:05 From Mark Anderson : I read ‘Generative Design’, loved the ideas .. got nowhere. 😀 Doesn’t put me off, but ‘obvious’, as in ‘what next’, is not intuitive.
17:33:27 From Fabien Benetou : Reacted to “really cool presen…” with 👌
17:33:30 From Mark Anderson : Reacted to “BitByBit is a cool t…” with 👍
17:34:20 From frodehegland : My hand keeps going down! Must think we are all using the same voice! Ha!
17:35:45 From Mark Anderson : Another in-point to these sorts of tools is the input data. As in, ‘if I want to make [thing] like on screen, what info in what form is needed?’ I might have the right info, but in the wrong form, or not understand where to feed the data into the tool/system. Not a fault of system designs, simply a reminder about the non-obviousness of things.
17:36:04 From Mark Anderson : Reacted to “My hand keeps going …” with 😮
17:36:10 From Ge Li To frodehegland(privately) : Frodeee, I have to run to a ML project meeting🏃(5 min late!)
Very cool project today and very technical hhhh
17:36:32 From frodehegland To Ge Li(privately) : Reacted to “Frodeee, I have to r…” with 🔥
17:36:45 From Fabien Benetou : Replying to “I read ‘Generativ…”
http://www.generative-gestaltung.de/1-archive/index.html
17:36:53 From Mark Anderson : Replying to “My hand keeps going …”
I guess Zoom thinks our virtual arms get tired, so lowers them to be more human.
17:38:34 From Fabien Benetou : (what the heck cultural hegemony is this… Zoom has French, US, Japanese, etc flags but NOT Belgium, very disappointing)
17:38:49 From Mark Anderson : Reacted to “(what the heck cultu…” with 🙄
17:38:50 From Karl Arthur Smink : My old PI used to call it “Drinking from a fire hose”
17:38:56 From Mark Anderson : Replying to “(what the heck cultu…”
Indeed.
17:39:58 From Brandel Zachernuk : On important historical milestones, this guy (who seems like he knows) says that it’s the 50th anniversary of Computer Lib: https://x.com/JimmyRis/status/1865261268153889125
17:40:25 From frodehegland : 56th today on Doug’s demo!
17:42:22 From Karl Arthur Smink : “Graph” or “Network” also come to mind
17:43:42 From Dene Grigar : No flat
17:43:49 From Karl Arthur Smink : The constant tug-of-war between familiar affordances and overcoming the inefficiencies of translating them into a foreign context
17:45:05 From Fabien Benetou : FWIW I believe also developers think about map as https://en.wikipedia.org/wiki/Map_(mathematics) and https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map which exists in a lot of languages
17:45:31 From Fabien Benetou : window to the 3D volume
17:45:35 From Mark Anderson : Replying to “On important histori…”
Correct, I think. CompLib first came out in ’74 (self-published). The ’87 reprint (via the Microsoft/Tempus imprint) updated the book and dropped some very early PC stuff that was already obsolete. The ’74 version is available in facsimile digital reprint at https://computerlibbook.com/collections/computer-lib-dream-machines/products/computer-lib-dream-machines.
17:47:25 From Karl Arthur Smink : those dang historians, just relabeling everything
17:47:34 From frodehegland : Reacted to “those dang historian…” with 😂
17:47:50 From Fabien Benetou : yes, during the demo at the symposium too
17:47:57 From Fabien Benetou : people wanted to throw knowledge, icons
17:48:14 From Mark Anderson : Reacted to “those dang historian…” with 🙄
17:48:40 From Karl Arthur Smink : I still like “Capsule”
17:48:57 From frodehegland : I have ‘Map’
17:48:58 From Dene Grigar : I need to get to my lab’s meeting. Thanks, Matas!
17:49:02 From Dene Grigar : Bye, folks!
17:49:07 From Karl Arthur Smink : Reacted to “Bye, folks!” with 👋
17:49:08 From Matas Ubarevicius : Reacted to “https://www.zaha-had…” with ❤️
17:49:25 From frodehegland : We kinda do need to define. I have a tab in Author called ‘Map’ and I want it to connect to the XR world…
17:49:44 From Matas Ubarevicius : Reacted to “also I imagine a lot…” with ❤️
17:50:17 From Matas Ubarevicius : Reacted to “(got this working li…” with 👍
17:50:20 From Fabien Benetou : https://en.wikipedia.org/wiki/Operational_definition
17:52:06 From Karl Arthur Smink : “Paradigm” ?
17:53:04 From frodehegland : Viewspec
17:53:48 From Fabien Benetou : mereology and naming, easy!
17:53:49 From Karl Arthur Smink : Yeah Unreal calls them Maps, Unity calls them Scenes, but they’re both still “just files”
17:54:11 From Matas Ubarevicius : Reacted to “on kernels :
https…” with 👍
17:54:19 From Brandel Zachernuk : This is the high-level description and API: https://github.com/immersive-web/model-element/blob/main/explainer.md
17:54:21 From Mark Anderson : Perhaps a middle ground is to offer our (personal) meaning as a meaning attached to a temporary term. IOW, “this is what I think the thing I’m calling X is, but I’m not invested in X as a name.” The description/PoV is the important part and can thus be mapped/ported onto a different model/name.
17:54:49 From Karl Arthur Smink : Reacted to “Perhaps a middle gro…” with 👍
17:56:31 From Fabien Benetou : on the topic https://newsletter.squishy.computer/p/where-to-draw-the-line was a good read
17:56:49 From Karl Arthur Smink : //TODO: use descriptive variable names
var myVar = new Var{};
17:57:11 From Rob Swigart : knowledge objects – knobjects
17:57:19 From Karl Arthur Smink : Reacted to “knowledge objects – …” with 😆
17:57:27 From frodehegland : Reacted to “knowledge objects – …” with 😂
17:57:30 From Andrew Thompson : I need to head out to my next meeting. Interesting presentation and discussions today!
17:57:35 From Fabien Benetou : Replying to “//TODO: use descri…”
I name everything DUCK until they start to do, or be used as, something and then I rename them accordingly. /s
17:57:41 From frodehegland : Reacted to “I need to head out t…” with 👍
17:58:00 From Karl Arthur Smink : Reacted to “I name everything DU…” with 🦆
17:58:16 From Fabien Benetou : Reacted to “I name everything …” with 🦆
18:00:08 From Mark Anderson : Replying to “knowledge objects – …”
In BrE the stress is normally on the first syllable of the word 🤔
18:01:28 From Mark Anderson : Reacted to “I name everything DU…” with 🦢