Transcript: 25 Feb 2022


Chat Log:

Gavin Menichini: My name is Gavin, I work here at Immersed. I’ve been here for about two years. And I, essentially, lead all of our revenue and business development operations here. I work closely with our founder and CEO on basically everything in the company. I was the seventh hire and am considered, along with our founding team, a mini co-founder. So I’m very familiar with the Immersed platform, and with the mission of what we’re building in the VR, AR space, as well as the metaverse and crypto, which we can talk on later. Yeah, today I’m going to be talking about Immersed, so I have a quick presentation discussing what we’re doing. But I would also love some feedback, an open, candid conversation. I know you have some questions you want to ask about this idea of working in the metaverse, going from 2D to VR, AR, and the implications it could have. So, working in the VR space, we’re very familiar with the technology and the implications of what we think is coming. I work very closely and directly with Meta, HTC, Microsoft, and, in the future, Apple. So we’re pretty ingrained in the market, and understand what’s coming. So, yeah, I can give a quick presentation on Immersed, but maybe we can first go around and do a quick round of introductions; that’d be helpful for me.
Frode Hegland: Sure, okay. I’ll start. This group is… We’ve had over 100 of these meetings now. Every Monday and Friday. It started as an outgrowth of the annual symposium on The Future of Text, we’ve got two books published. We’ve now started a journal. The journal will be collated into a book, at the end of the year. We are passionate about text. And currently we are really living in VR land. The guys will introduce themselves in a minute. But from my perspective, what I’m really strongly focused on is the lack of imagination around this. We’re focused on work in VR, and I’m really, to put it plainly, shit scared that in a year or two, with Apple and other advanced devices out there, people will think that it’s basically a meeting room, and a game, and that’s about it. So we’re trying to look at ways of making, working collectively, and individually in VR, something else. And we’re doing it just as an open group of people, doing demos, and yeah, figuring it out together. Okay, who’s next?
Elliot Siegel: Well, I’m Elliot Siegel. And the repository that Frode mentioned was in the National Library of Medicine. I was an executive with NLM for 35 years. I retired in 2010 and continued working with them as a consultant, and I still have involvement with them. And so, I basically was interested in the kind of matchmaking and arrangement between Frode, Vint Cerf, and my old organization. I must confess I feel very intimidated. I’m obviously several generations apart from you guys. And it’s quite impressive. And my son, who’s in his early 40s, is an Oculus user, and he said, “Dad, you’ve got to try this out”. And I’m thinking, “What the hell am I going to do with it? I’m not a gamer.” And so I’m looking for… They want to give me an 80th birthday present; this might be it. But I first have to find out whether there’s a use for it for me, beyond playing games. I did experiment with Second Life, by the way. That was probably before you guys were born. And I did bring that into NLM. We did some work applications with that. So I do have an interest in VR, and so I’m here to learn. I’ve got a phone call coming in; at any time I may have to get off at that point. But I’ll listen for as long as I can. Thank you.
Brandel Zachernuk: Welcome. It’s exciting to have somebody with deep domain knowledge and awareness of what kind of problems need to be solved in such an important and serious space as the NLM. My name is Brandel. I am the person who set the cat amongst the pigeons somewhat within The Future of Text book. I’m a creative technologist, working for a big tech company in Silicon Valley, but very much here in a personal capacity. Something that I’ve been very passionate about trying to investigate and play with over the last 10 years or so is: what is the most pedestrian thing that you can do with virtual reality and emerging technology? And my version of that was word processing, was writing and reading, and thinking about what are the basic building blocks of that process of writing and reading that can fundamentally be changed by virtual reality. Realizing that if you don’t have a screen, you have the ability for information to mean what it means for your purposes, rather than for the technical limitations that apply as a consequence of a mouse or keyboard or things like that. I’m also deeply invested in understanding some of the emerging cognitive science and neurophysiological views about what the mind is and the way that we work best. So reading about, and learning about, what people call 4E Cognition, that is: embodied, embedded, enactive, and extended cognition. And how that might pertain to what we should be doing with software and systems, as well as hardware, if necessary, to make it so that we can think properly, and express properly, and so on. So, that’s why I’m here. And that’s what I’ve been playing with, and I’m looking forward to seeing what you’ve got.
Frode Hegland: Brandel, I finally have a bio for you now. That was wonderful!
Fabien Benetou: I’ll jump in, because I’m here thanks to Brandel. And I think we have a somewhat similar profile. I’m a prototypist. I (indistinct) 15 years ago, in any kind of substrate, and transitioned to a wiki. And now I’m basically bringing that wiki to VR and AR. And I say this because there are a lot of people getting excited by VR, but few who think that “not games”, to paraphrase Elliot, is interesting, and I think it goes a lot further than this. This looks a bit like a city, and it’s basically a representation of part of my mind, through my notes, as a 3D space that can be navigated in VR. To be fair, I don’t know what I’m doing, literally, I really don’t know how to do this. So I’m just tinkering, building prototypes, sharing very candidly what I build with everyone, because I’m quite eager to hear the opinions, the criticism. And I’m genuinely convinced, as Brandel highlighted, that the neurological aspect of how we move in space can greatly benefit from the new medium. But like I said, I’m very candid about it. I can tinker, I can code, but I don’t know how to do this.
Frode Hegland: You do know what you’re doing. But that’s another discussion right there. Alan?
Alan Laidlaw: Yes, hello. I’m Alan. I work at Twilio, which is an SMS and, sort of, multi-channel company. And I work there because of my interest in the various forms of text and communication. Twilio is unique in that, and it’s probably the closest to a ubiquitous governance model in that. Most of my job switches between technology and policy. The reason why I’m here, and how I think it dovetails nicely with VR, is, as Brandel said, embodied, enactive cognition. When is text the most natural way to communicate, versus other visual forms, or interactive forms? I think there’s a lot of potential there. At the same time, we’re in the middle of a shared experience where we can’t even seem to agree on the same definitions of words. Or we get excited about technology without looking at the base misunderstandings at the word level. So, that’s why I’m here.
Frode Hegland: Great. So, Gavin, did you want to do a screen share? You’re welcome to do that, or talk, or however. But before you do that, just a quick check everybody here knows what Immersed is. And you’ve at least tried it or something similar, right?
Alan Laidlaw: I know what it is, but I’d love a general description for the recording.
Gavin Menichini: Yep. More than happy to talk through it. And so, it seems that each of you is a VR user to some extent, and owns a Quest 2 or an HTC device or something. Am I understanding that correctly, or has anyone here never used VR before?
Frode Hegland: Bob, have you used VR recently? Within the last few years? You haven’t? Right. But Bob has a long history of making information murals, and Brandel has been working on putting those into VR. So, that’s the perspective and wisdom of Bob, who’s not in VR yet, but he will be soon. So, awesome. And you’re on, Gavin.
Gavin Menichini: Awesome, of course. Well, thanks everyone for the introductions. It’s an honour to be here and chat with each of you about Immersed. So, what I’d like to do with our time is: I can give a high-level description of Immersed. I’d actually also like to show you a video to help encompass what the experience looks like. Some of you have used Immersed and are very familiar; for those of you who haven’t checked it out, I think the video that our marketing team put together is very helpful, just at the high level. And then, I can walk through a basic slide deck that I like to show companies, and showcase the value a little bit. It’s somewhat on the sell side, and I assure you this is not a sales pitch, but I think it should be helpful to showcase some of the value that we have.
Frode Hegland: A little bit of an intro is nice. But consider that the experience is quite deep, in general. And also because it will be recorded. If you can do, kind of, a compressed intro, and then we go into questions and deeper, that would be really great.



Gavin Menichini: Immersed is a virtual reality productivity product, where we make virtual offices. And so, what that means is, Immersed is broken down into two categories, in my opinion. We have a solo use case, and we have a collaboration meeting use case. So, the main feature that we have in Immersed is the ability to bring your computer screen, whether you have a Mac, a PC, or Linux, into virtual reality. So, whatever is on your computer screen is now brought into Immersed. And we’ve created our own proprietary technology to virtualize extensions of your screen. Very similar to if you had a laptop or computer at your desk and you plugged in extra physical monitors for more screen real estate. We’ve now virtualized that technology. It’s proprietary to us. And we’re the only ones in the world who can do that. And so now, in Immersed, instead of working on one screen, for example, I use a MacBook Pro for work, so instead of working on just that one MacBook Pro, with an Oculus Quest 2 or another compatible headset, I can connect it to my computer, have the Immersed software on my computer and in my headset, bring my screen into virtual reality, and maximize it to the size of an iMac screen. I can shrink it, and then create up to five virtual monitors around me for a much more immersive work experience for your 2D screens. And you can also have your own customized avatar that looks like you, and you can beam into all these cool environments that we’ve created. Think of them as higher-fidelity, higher-quality video game atmospheres. But not like a game, more like a professional environment. But we also have some fun gaming environments, or space station offices, a space orbitarium, an auditorium. We have something called the alpine chalet, a really beautiful ski lodge. Really, the creativity is endless.
And so, within all of our environments, you can work there, and you can also meet and collaborate with people as other avatars, instead of us meeting here on Zoom, where we’re having a 2D, very disconnected experience. I’m sure each of you has probably heard the term Zoom fatigue, or video conference fatigue? That’s been very real, especially with the COVID pandemic. And so, fortunately, that’s hopefully going away, and we can have a little bit more in-office interaction. But we believe Immersed is the perfect solution for hybrid and remote working. It’s the best tech bridge for recreating that sense of connection with people. And that sense of connection has been very valuable for a lot of organizations that we’re working with, as well as enhancing the collaboration experience through our monitor tech and our screen sharing, screen streaming technology. So, people use it for the value, and the value that people get out of it is that they find themselves more productive when working in Immersed, because they not only have more screen real estate, but all the environments we’ve created are intentionally designed to help promote cognitive focus. So, I hear lots of reports from customers and users who tell us that when they’re in Immersed, they feel hyper-focused, more productive, in a state of deep workflow, whatever term you want to use. And people are progressing through their work faster, and feel less distracted. And then, just also, generally more connected, because when you’re in VR, it really feels like you have a sense of presence when you’re sitting across a table from another avatar that is your friend or colleague. And that really boosts employee satisfaction and connection, for an overall more engaging, better collaborative experience when working remotely. Any questions around what I explained, or what Immersed is?


Fabien Benetou: Super lovely. When you say screen sharing, for example, here I’m using Linux. Is it compatible with Linux? Or is it just Windows or macOS? Is it web-based?
Gavin Menichini: So, it is compatible with Linux. Right now, you can have virtual monitors through a special extension that we’ve created. We’re still working on developing the virtual display tech to the degree we have for Mac and Windows. Statistics say that Linux is only one to two percent of our user base, and so, for us as a business, we obviously have to optimize for most of our users, since we’re a venture-backed startup. But that’s coming in the future. And then, you can also share screens with Linux. And so, with some of the extensions, you can have multiple Linux displays and share those screens as well, within Immersed.
Alan Laidlaw: That’s great. Yeah, this is really impressive. This is a question that may be more of a theme to get into later. But I definitely see the philosophy of starting with, where work is happening now, and like the way that you make train tracks, bringing bits and pieces into VR so that you can get bodies in there. I’m curious as to, once that’s happened or once you feel like you’ve got that sufficiently covered, is there a next step? What would you want the collaborative space in VR to look like that is unlike anything that we have in the real world, versus… Yeah, I’d love to know where you stand philosophically on that, as well, as whatever the roadmap is?
Gavin Menichini: Sure. If I’m understanding your question properly, it’s how do we feel about how we see the evolution of VR collaboration, versus in-person collaboration? If we see there’s going to be an inherent benefit to VR collaboration as we progress, versus in person?
Alan Laidlaw: Yeah, there’s that part. And there’s also the question of: is the main focus of the company to replicate and provide the affordances that we currently have, but in VR? Or is the main focus, now that things have been ported into a VR space, to explore what VR can do?
Gavin Menichini: Okay. So, it’s a little bit of both. It’s mostly that we want to take what’s possible for in-person collaboration and bring it into VR, because we see a future of hybrid remote working. And COVID, obviously, accelerated this dynamic. So, Renji, our founder, started the company in 2017, believing that hybrid remote work was going to become more and more possible as the internet and all things Web 2.0 became more prevalent. And we have technology tools where you don’t have to drive into an office every single day to accomplish work and be productive. But we found that the major challenges were that people aren’t as connected, and the collaboration experience isn’t the same as being in person. Those are huge challenges for companies, along with a decrease in productivity. So, all of these are major challenges to solve. And those are the challenges that Renji set out to fix with Immersed. So when we think about the future, we see Immersed as the best tech bridge, or tool, for hybrid or remote working. Where you can maximize that sense of connection that you have in person, by having customizable avatars, whose fidelity and quality will increase over time. Giving you the tech tools, through multiple monitors, to enhance the solo work experience, so people become more productive, which is the end goal: giving them more time back in the day. And then also, corporations can continue to progress in their business goals, while balancing that with giving employees more time back in their day, to find that beautiful balance. And so, we see it as a tech bridge, but as a VR company we’re also exploring the potentials of VR. Is there something that we haven’t tapped into yet that could be extremely valuable for all of our customers and users, to add more value to their lives and make their lives better?
So, it’s less the former; it’s more that we want to make the hybrid remote collaboration and work experience much fuller, with more value than currently exists today in the Zoom, Slack, Microsoft Teams paradigm.
Brandel Zachernuk: Yeah, I’m curious. It sounds like, primarily, or entirely, what you’ve built is the connective tissue between the traditional 2D apps that people are using within their computer space, and being able to create multi-panels that people are interacting with that content on. Is that primarily through traditional input? Mouse, keyboard, trackpad? Or is this something where they’re interacting with those 2D apps through some of the more spatial modalities on offer: hands or controllers? Do you use hands, or is it all entirely controller-based?
Gavin Menichini: Yeah, great question. So, the answer is: our largest user base is on the Oculus Quest 2. It’s definitely the strongest headset, bang for your buck, on the market for now. There’s no question. Right now, you can control your VR dynamics with the controllers or with hand tracking. We actually suggest people use hand tracking, because it’s easier, once you get used to it. One of the challenges we face right now is that there is an inherent learning curve for people learning how to interact with VR paradigms. And, as I’m on the revenue side, I have to demonstrate Immersed to a lot of different companies and organizations, and so it can be challenging. I imagine it’s very similar to, and I was born in ’95, so I wasn’t around in those times, but I imagine it feels like, demoing email to someone for the first time, on a computer, when they’ve never seen a computer. They totally understand the concept of email: no more paper memos, no more post-it notes, paper organization and file cabinets all exist in the computer, and they get it. But when I put a computer in front of them for the first time, they don’t know how to use it. What’s this trackpad, the keyboard, the mouse? They don’t understand the UI, UX of the Oculus OS. They don’t understand how to use it, so it’s intimidating. So, that’s the challenge we come across. Does that answer your first question, Brandel?
Brandel Zachernuk: Yeah, I’ve got some follow-ups, but I’ll cede the floor to Frode.
Frode Hegland: Okay. I’m kind of on that point. So, I have been using Immersed for a bit. And the negative, to take that first, is that I think the onboarding really needs help. It’s nice when you get that person standing at your side and pointing out things, but then… So, the way it works is, the hand tracking is really good. That is what I use. I use my normal physical keyboard on my Mac, and then I have the monitor. But it’s, to me, a little too easy to go in and out of the mode where my hands change the position and size of the monitor. You’re supposed to do a special hand gesture to lock your hands out of doing that. And then there’s pinning. So, when you’re talking about these onboarding issues, that’s still a lot of work. And that’s not a complaint about your company; that’s a complaint across the board. The surprise is also that it really is very pleasant. I mean, here, in this group, we talk about many kinds of interactions, but what I would like, in addition to making it more locked, is to make the pinning easier. I do find that, sometimes, it doesn’t want to go exactly where I want. I’m a very visual person, kind of anal in that way, to use that language. I want it straight ahead of me, but very often it’s a little off. So, if I resize it this way, then it kind of follows. So, in other words, I’m so glad that you are working on these actual realities, boots-on-the-ground things, rather than just hypotheticals. Because it shows how difficult it is. You get this little control thing on your wrist; if there were one that says “hyper control mode”, with different levels. Anyway, just an observation, a question, and a point.
Gavin Menichini: Yeah. I can assure you that we obsess over these things internally. Our developers are extremely passionate about what we’re building. We have a very strong XR team. And our founder is very proud of how hard it is to get into our company, and how many people we reject. So, we really are hiring the best talent in the world, and I’ve seen this first-hand, getting to work with them. And we also have a very strong UI, UX team. But we’re really on the frontier; this has never been done before, and we are pioneering. What does it mean to have excellent UI, UX paradigms and user onboarding paradigms in virtual reality? And one of the challenges we face is that it’s still early. People are still trying to figure out even the foundations of what is good UI, UX. And we’re now introducing space, spatial computing; we’re going from 2D interfaces to 3D. What have we learned from good 2D UI, UX that translates to 3D, and its paradigms? And people are now not just using a controller and mouse; they’re using hand tracking and spatial awareness. And not only do we have to understand what good practice is for UI, UX paradigms, but how do we code that well? And how do we build a good product around that, while also having dependencies on Oculus, HTC, and Apple? We’re dependent upon hardware technology to support our software. So we still live very much in the early days, where there’s a lot of tension, and things are still being figured out. Which is why we’re a frontier tech. Which is why it takes time to build. But even with VR, AR, I think it’s just going to take longer, because there are so many more factors to consider that the people who pioneered 2D technology, Apple, Microsoft, etc., didn’t have to consider. And so, I think the problem we’re solving, candidly, is exponentially harder than the problem they had to solve.
But we also get to stand on their shoulders, and take some precedence that they built for us, and apply that to VR, where it makes sense.
Brandel Zachernuk: So, in terms of those new modalities. In terms of the interaction paradigms that seem to make the most sense, it sounds like you’re not building software that people use, as much as you’re making software that people reach through to their other software with, at this point. Is that correct? You’re not making a word processor; you’re making the app that lets people see that word processor. Which is a big problem. I’m not minimizing it. My question is:
Do you have observations, based on what people are using, about the way they’re changing, for example, the size of their windows, and the kinds of ways they’re interacting with it? Do you have either observations about what customers are doing as a result of making the transition into effective productivity there? Or do you have any specific recommendations about things that they should avoid or reconsider, given the differences in, for example, pixel density, or the angular fidelity of hand tracking within 3D, in comparison to the fidelity of being able to move around a physical mouse and keyboard? Given that those things are so much more precise, but also much more limited in terms of the real estate they have the ability to cover. Do you have any observations about what people do? Or, even better, any recommendations that you make to clients about what they should be doing as a result of moving into the new medium?
Gavin Menichini: Yeah, really good question. There are a few things. There’s a lot we could suggest. A lot of what we’re building is still very exploratory: what’s the best paradigm for these things? And so, we’ve learned a lot, but we also understand there’s a lot more for us to build internally and explore. First and foremost, and hopefully this is obvious, but to address it: we definitely do not take a dystopian view of VR, AR. We don’t want people living in the headset. We don’t want people strapping it to their faces with a feeding tube and water, etc. That’s not the future we want. We actually see VR, AR as a productivity enhancer, so people can spend less time working, because they’re getting more done in our product; because we’ve created a product so good that it allows them to be more productive, they get more done at work, but also have more time for themselves. So, we suggest people take breaks; we don’t want you in a headset for eight hours straight. The same way no person would suggest you sit in front of your computer and never stand, use the restroom, eat lunch, go on a walk, or take a break. We take the same paradigms. Because you can get so focused in Immersed, we also encourage our users: “Yeah, get stuff done, but take a break”. But then, we’re also thinking through some of the observations we’ve found. We’ve been surprised at how focused people have been. And the onboarding challenge is a big one, as Frode was mentioning. It’s one that we think about often. How do we make the onboarding experience better? And we’ve made progress from where we were in the past. So, Frode, you’re seeing some of the first iterations of our onboarding experience; in the past, we didn’t have one. It’s something we actually pushed really hard for. We saw a lot of challenges with users sticking around because we didn’t have one.
And we’re now continuing to push: how do we make this easier? How do we explain things to people without making it too long, where people get uninterested and leave? It’s a really hard problem to solve. But we’ve found that, as we make the onboarding experience easier, helping people get used to the paradigms of working in VR and AR, and explaining how our technology works, we can get them to what we like to call the magic moment, where they can see the potential of having their screens in VR. Having it be fully manipulable, you’re like a Jedi using the Force: you can push and pull your screens with hand tracking, pinch to shrink and expand, put them all around you. If I’m answering your question, Brandel, we’re still exploring a lot of paradigms. But we’ve found that it’s surprising how focused people are getting, which is awesome and encouraging. We find, which isn’t as surprising anymore, that companies, organizations, and teams are always wowed by how connected they feel to each other. So we always try to encourage people to work together. Even on our elite tier, which is just our middle tier, think of it as a pro solo user tier, you have the ability to collaborate with up to four people in a private room. But we also have public spaces, where people can hang out, and it’s free to use. Think of it as a virtual coffee shop. You can hang out there and meet with people. You can’t share your screens, obviously, for security reasons. But you can meet new people and collaborate. And it’s been cool to see how we’ve formed our own community, where people can be connected with each other, hang out, and meet new people. So, hopefully, that answers a little bit of your question. There’s still a lot more we’re learning about the paradigms of working with 2D screens, what people prefer, and what best practice is.
Brandel Zachernuk: Yeah. One of the issues that I face when I think about where people can expect to be in VR productivity at this point is the fact that Quest 1, Quest 2, and Vive all have a fixed focal distance. Which is pretty distant: the minimum accommodation distance is normally about 1.4 meters, which means that anything at approximately arm’s length, which is where we have done the entirety of our productivity in the past, is actually getting into eye strain territory. The only headset on the market that has any capacity for addressing that kind of range is actually the Magic Leap, which I don’t recommend anybody pursue, because it’s got a second focal plane at 35 centimetres. Do you know where people put those panels on Quest? On Vive? I don’t know if you’ve got folks in a crystal or a coral value, whether that has any distinction in terms of where they put them? Or alternatively, do you recommend, or are you aware of, anybody making any modifications to deal with a closer focal distance? I’m really interested in whether people can actually work the way they want to, given the current limitations of the hardware at the moment.
Gavin Menichini: Yeah. There are a few things in response to that. One: we’ve actually found, internally, even with the Quest 2, although the screen distance, focal point, etc., is a challenge, that people in our experience are reporting less eye strain working in VR than working from their computers. We’re candidly still trying to figure out why that’s the case. I’m not sure how the distance and the optics games they’re playing in the Quest 2 and the other headsets we use factor in. But we’ve found that people are reporting less eye strain, based solely on customer reviews and feedback. We haven’t done any studies. I personally don’t know a lot about IPDs and the focal length distances of the exact hardware of all the headsets on the market. All I’m doing is paying attention to our customers and users, and what they’re saying. And, surprisingly, they’re not getting that much eye strain. A lot of people actually say they prefer working in VR to working from their computers, even without blue light glasses. And they’re still getting less eye strain. So, the science and technicalities of how that works, I’m not sure; it’s definitely out of my realm of expertise. But I can assure you that the hardware manufacturers, because of our close relationship with Meta and HTC, are constantly thinking about that problem too. Because you’re strapping an HMD to your face: how do you have a good experience, from a health standpoint, for your eyes?
Brandel Zachernuk: Do you know how much time people are clocking in it?   
Gavin Menichini: On average, our first user session is right around an hour 45 minutes to two hours. And we have power users who are spending six to eight hours a day inside of Immersed, clocking that much time and getting value out of it. And it’s consistent. I’m not sure what our average session time is; I would say it’s probably around an hour to two hours. But we have people who use it for focus sessions in Immersed and will spend four or five hours in it, and our power users will spend six, seven, eight hours.
Frode Hegland: I can address a few of these points. Because, first of all, it’s kind of nice: I don’t go on Immersed every week, but when I do, I get an email that says how many minutes I spent in Immersed, which is quite a useful statistic. So, I’m sure, obviously, you guys have more on that. When it comes to the eye strain, I tend to make the monitor quite large and put it far away, to do exactly the examination you’re talking about, Brandel. And I used to not like physical monitors being at that distance; it was a bit odd. But since I use a keyboard and trackpad, where I don’t have to search for a mouse, I don’t need to see my hands anyway, even though I can. I do think that works. But maybe, Gavin, would you want to… you said you had a video to share a little bit of what it looks like?
Gavin Menichini: Sure, yeah. I can pull that up real quick. So it’s a quick marketing demo video, but it does do a good job of showcasing the potential of what’s possible. And I’m not sure if you guys will be able to hear the audio. It’s just fun background music. It’s not that important. The visuals are what’s more important. Let me go ahead and pull this up for us real quick.
Frode Hegland: I think you can just mute the audio and then talk if you want to highlight something, I guess.
Gavin Menichini: Okay. Actually, yeah. That’s probably a good idea. So, this is also on YouTube. If you guys are curious and want to see more content, just type in Immersed VR on YouTube. Our Immersed logo is pretty clear. Our content team and marketing team put out a lot of content, if you’re curious. We also have a video called “Work in VR, 11 tips for productivity”, where our head of content goes through some different pro tips, if you want to dive into a more nuanced demo of how you do things, etc., and see more of the user experience. So, this is a good, helpful, high-level video. You can see you have full control of your monitor. You can make it ginormous, like a movie screen. We have video editors, day traders, finance teams, and, mostly, developers as our main customer base. As you can see here, the user is just sitting down at the coffee table, and the keyboard is tracked. We also have a brand new keyboard feature coming out, called keyboard passthrough, where we’ll leverage the cameras of your Oculus Quest so that, while in VR, you can see your real-life keyboard, which we’re very excited about. And here you can see a brief collaboration session of two users collaborating side by side. You can also incorporate your phone into VR, if you want to have your phone there. And then, here you’ll see what it looks like to have a meeting in one of our conference rooms. You can have multiple people in the room; we’ve had 30-plus people in an environment, so it can easily support that. It also depends on everyone’s network strength and quality, very similar to Zoom or a phone call; that determines the quality of the meeting’s audio and screen sharing, but if everyone’s on a good network, it’s not an issue. And then, lastly, here you can see one of our users with five screens, working in a space station. And that’s about it.
Any questions or things that stood out from that, specifically?
Frode Hegland: Yeah. A question about the backgrounds. You have some nice environments that can be applied. I think we can also import any 360 images, is that right, currently? And if so, can we also load custom 3D environments in the future? Are you thinking about customization for that aspect of it?
Gavin Menichini: Yes. So, we are thinking about it, and we do have plans for users to incorporate 3D environments. There are a few challenges with that, for a few obvious reasons, which I can touch on in a second. But we do support 360 environments, 360 photos, for users to incorporate. And we also have a very talented artist and developer team that are constantly making new environments. And we have user polls, and we figure out what our users want us to build and what they’d like to see. And as we continue to grow the company — right now we’re in the process of fundraising for a Series A, and once we do that, we’re hoping to go from 27 or 28 employees right now to at least 100 by the end of the year, the vast majority of them developers, to continue to enhance the quality of our product. And then, we also will support 3D imports of environments. But because the Quest 2 has some compute limitations, we have to make sure that each of our environments has specific poly counts and specific compute budgets, so that the Quest 2 won’t explode if you try to open that environment in Immersed, as well as making sure that your Immersed experience can be optimized, high quality, and not going to lag, et cetera. So right now, we’re thinking, one: how do we enable our users to build custom environments? And, two: how do we make sure they meet our specific requirements for the Quest 2? But naturally, over time, headsets are getting stronger and computing power is getting better — very similar to when you go from Nintendo 64 graphics to now the Xbox Series X; the jump in quality is ginormous, and headset quality will follow the same path. So, we’ll have more robust environments and some more give and take in optimizing the environments our users give to us. So it is in our pipeline, but we’re pushing it further down the pipeline than we originally wanted, just due to some natural tech limitations.
And also the fact that we are a venture-backed startup, and we have to be extremely careful about what we work on and optimize for the highest impact. But we’re starting to have some more fun and some traction in our Series A conversations, and hopefully we’ll have some more flexibility, financially, to continue pushing.
Frode Hegland: Thank you. Alan?
Alan Laidlaw: Yes. So, this is maybe a, kind of, Twilio-esque question about the design material of network strength, bandwidth, and compute, like you mentioned. I saw in the demo the virtual keyboard where, of course, the inputs would be connected to a network, versus a physical keyboard that you already have in front of you. If it were possible to use the physical keyboard and have those inputs go into the VR environment, or AR environment in this case, would that be preferred? Is that the plan? And if so — you know, that opens up, I mean, this is such rich pioneer territory, as you mentioned, there are so many ways to handle this — would there be a future where, if my hands are doing one thing, that’s an indication that I’m in my real-world environment, but if I do something else with my hand, that’s suggesting, you know, take my hand into VR, so I can manipulate something? I’m curious about any thoughts on, essentially, that design problem, versus the hard physical constraints of bandwidth. Is it just easier — does it make a better experience — to stick with a virtual keyboard for that reason, so you don’t, at least, have a disconnect between the real world and VR? And I’m sure there are other ways to frame that question.
Gavin Menichini: No, that’s fine. And I can answer a few points and ask a few follow-up questions to make sure I understand you correctly. For the keyboard, specifically, the current keyboard tracking system we have in place is not optimal. It was just the first step of what we wanted to build to help make the typing-in-VR problem easier, which is our biggest request. So we are now leveraging, I think, a way stronger feature, which is called keyboard pass-through. For those who don’t know, the Oculus Quest 2 has a pass-through feature, where you can see the real world around you through the camera system, which stitches the imagery together. We now have the ability to create a pass-through portal system, where you can cut out a hole in VR over your keyboard. So, whatever keyboard you have — Mac, Apple, whatever, or the funky keyboards that a lot of our developers really like to use for a few reasons — you can now see that keyboard and your real hands through a little cut-out in VR. And then, on inputs, what you mentioned about doing something with your hands being a real-life thing versus a VR thing: are you referring to having a mixed-reality headset that can do AR and VR, where you want to be able to switch from the real world to VR with a hand motion?
Alan Laidlaw: Yeah. A piece of my question. I can clarify. I am referring to mixed. But specifically where that applies is the cut-out window approach, is definitely a step in the right direction. But it seems that’s still based entirely on the Oculus understanding of what your fingertips are doing. Which will obviously have some misfires. And that would be an incredibly frustrating experience for someone who’s used to a keyboard always responding, hitting the keys that you’re supposed to be hitting. So, at some point, it might make more sense to say, “Okay, actually we’re going to cut out. We’re going to forget the window approach and have the real input from the real keyboard go into our system”. 
Gavin Menichini: So, that’s what it is, Alan. Just to further clarify, we always want our users to use their real hands on the real keyboard. And you’re not using your virtual hands on a virtual keyboard. You’re now seeing, with pass-through, your real hands and your real keyboard, and you’re typing on your real keyboard.
Frode Hegland: A really important point to make in this discussion is that, for a single user, there are two elements here: there is the 3D environment around you, and then you have your screen. But that is your normal Mac, Linux, or Windows screen. And you use your normal keyboard. So, I have actually used my own software; I’ve used Author to do some writing on a big, nice screen, and it is exactly the keyboard I’m used to.
Alan Laidlaw: Right. So, how that applies to the mixed reality question is, if I’m using the real keyboard, have the real screen, but one of my screens is an iPad, a touch screen, that’s in VR, where I want to move some elements around, how do I then, transition from my hands in the real world to now I want my hand to be in VR?
Gavin Menichini: So, you’re going to be in Immersed, as of now. You’re going to be in VR, and you’re going to have a small cut-out into the real world. So, right here is the real world, through a cut-out hole, and then, if you have your hands here and you want to move your hands into here, the moment your hands leave the pass-through portal in VR, they turn into virtual hands. And, to further clarify, right now your virtual hands, if you have hand tracking on, will still be overlaid on your hands in the pass-through window. We’re experimenting with taking that out, for further clarity of seeing your camera hands on your keyboard. But, yes, when you’re in Immersed, it’ll transition from your camera hands, your real-life hands, to virtual hands. If you have an iPad and you want to swipe something, whatever, that’s seamless. But then, for mixed-reality dynamics in the future, we’re not sure what that’s going to look like, because it’s not here yet. So, we need to experiment and figure out what that looks like.
Frode Hegland: Fabien?
Fabien Benetou: Yeah, thank you. It’s actually a continuation of your question, because you asked about the background environments, using 360 images, and including a whole 3D model. It’s also a question that, you know, I was going to ask, and I guess Gavin could have guessed, because I’m a developer, as you can imagine. If it’s not enough — if somehow there are features that I want to develop, and they are very weird, and nobody else will care about them, and, as you say, as a start-up you can’t do everything, you need to set some priorities — what can I do? Basically, is it open source? If not, is there an API? If there is an API, what has the community built so far?
Gavin Menichini: Yeah, great question. So, as of now, we currently don’t have any APIs, open SDKs, or open source code for users to use. We’ve had this feature request a lot. And our CEO is pondering what his approach to that should be in the future. So, we do want to do something around that. But, because we’re still so early stage, and we have so many things we have to focus on, it’s extremely important that we’re very careful with what we work on, and how focused and hard working we are towards those things. As we continue to progress as a company, and as our revenue increases and we raise subsequent rounds of funding, that gives us the flexibility to explore these things. And one of the biggest feature requests we’ve had is an Immersed SDK for our streaming monitor technology, so people can start to play with different variations of what we’re building. But I do know that Renji does not allow any free, open source coding work whatsoever, for a few reasons, legality-wise. I think we had a few experiences in the past where we experimented with that, and it backfired, to where developers were claiming they were owed equity or funding. It was a hot mess. So, we don’t allow anyone to work for us for free, or to give us any form of software in any regard, any work, period, to prevent any legal issues and any claims like that, which is kind of unfortunate. But he’s a stickler and definitely will not budge on that. But in the future, hopefully, we’ll have an SDK, or some APIs that are opened up, or open source code, once we’re more successfully established, for people to experiment with and start making their own fun iterations on Immersed.
Brandel Zachernuk: I have a question about the windows. You mentioned that, when somebody has a pro subscription, they can be socially connected, but not share screens. I presume, in an enterprise circumstance, people can see each other’s windows. Have you observed any ways in which people have used their windows more discursively, in terms of having them as props, essentially, for communicating with each other, rather than primarily, or solely for working on their own? The fact that they can move these monitors, these windows around, does that change anything about the function of them within a workflow or a discussion context?
Gavin Menichini: Yeah. So, to clarify the tiers and their functionality. We have a free tier, where you can connect your computer and traverse the gap. You get one free virtual display. In all of our public rooms, you can’t share screens, regardless of your license. The only place you can share screens is in a private collaboration room, which means you have to be on our elite tier or a teams tier. On our elite tier, which is our mid, pro-solo tier, you can have up to three other people in the room with you, four total, and you can share screens with each other. And the default is that your screens are never shared. So, if you have four people in a room, and they each have three screens up, you cannot see anyone else’s screen until they voluntarily share a screen and confirm it. And then it will highlight red, for security purposes. But say, Brandel, you wanted to share your screen, and we’re all sitting at a conference room table: if I have my screens one, two, three, right here, and I share my middle screen, my screen is then going to pop up in your perspective. You have control of my shared screen: you can make it bigger, shrink it, etc. And we’re also going to be building different environment anchors. For example, in a normal conference room you have a large TV on the wall; in virtual reality, you could take your screen and snap it to that place, and once it’s snapped into that little TV slot, that screen is automatically shared, and everyone sees it at that perspective, rather than their own. And then, from a communication standpoint, we have teams who will meet together in different dedicated rooms, and they’ll share screens and look at data together.
There’s — I can’t remember quite the name — a software development team where, when something goes down, they have to come together very quickly. DevOps teams come together, they share screens looking at data to fix a down server or something, and they can all see and analyse that data together. And we’re exploring the different features we can add to make that experience easier and more robust.
Brandel Zachernuk: And so, yeah, my question is: are you aware of the ways in which people make use of that, in terms of being able to share and show more things? One of the things about desktop computing, even in contexts where people are co-located, co-present in physical meatspace, is that you don’t actually have very good performability of computer monitors. It kind of sucks in Zoom. It kind of sucks in real life, as well. Do people show and share differently as a consequence of being in Immersed? Can you characterize anything about that?
Gavin Menichini: Yes. So, the answer is yes. They have the ability to share more screens. In meatspace, in the real world — a funny term there, meatspace — you can only have one computer screen if you’re working on a laptop, and that’s frustrating. Unless you have a TV, you have to AirDrop, XYZ, whatever. But in Immersed, you have up to five screens. And so, we have teams of four, and they’ll share two or three screens at once, and they can have a whole arrangement of data — ten screens being shared — and they can rearrange those individually, so it all pops up in front of them, and they rearrange the screens in the order that they want, and they can all watch a huge shared screen of data. That is not possible in real life, but it is because of the technology we provide to them. And then, there are different iterations of that experience where, maybe, it’s two or three screens, here or there. And so, because of the core tech that we have, where you can have multiple screens and then share each of those, that opens up the possibility for more data visualization, because you have more screen real estate. There’s the opportunity to collaborate more effectively than if you had one computer screen on Zoom, which, as you mentioned, is challenging, or even in real life — because in real life you could have a computer and two TVs, but in Immersed you could have eight screens being shared at once.
Brandel Zachernuk: And do you share control? Is it something where it’s only the person sharing it has the control, so other people would have read-only access? Or do you have the ability for people to be able to pass that control around? Send the user events such that everybody would be able to have shared control?
Gavin Menichini: So, not right now, but we’re building that out. For the time being, we want everyone just to use the collaboration tools they are currently using: use Google Docs, use Miro, use Slack, whatever. So, whatever collaboration documents you guys are using now, we just want you to use those applications in Immersed, because whatever you can run on your computer, you can run on your screen in Immersed. It is just your computer, in Immersed. So, we tell people to do that. But now they get the added benefit of deeper connection — actually sitting next to your employee or your colleague — and now you can have multiple screens being shared. So, it’s like a supercharged productivity and collaboration experience. Any other questions? I have about four minutes left, so I want to make sure I can answer all the questions you guys have.
Fabien Benetou: I’ll make it a one-minute question. I’ll just speak faster. If I understood correctly, the primitive is the screen. But is there anything else beyond the screen? Can you share 3D assets? Could content be pulled out of the screen? If not, can you take a capture of the screen, either as image or video? And is it the whole screen only, or part of the screen? And imagining you’ve done that — let’s say, part of the screen as a video of 30 seconds — can you make it permanent in the environment, so that it’s there if I come back with colleagues tomorrow? Because that’s the challenge we have here all the time: we have great discussions, and then, what happens to the content?
Gavin Menichini: So, it’s in our pipeline to incorporate other assets that will be able to be brought into Immersed and then remain persistent in the rooms. We’ve created the technology for persistent rooms, meaning, whatever you leave in there is going to stay. Very similar to a conference room that you’ve dedicated to a project: you put Post-it notes around the wall and, obviously, come back to them the next day. It’s the same concept in VR. And then, we also have plans to incorporate 3D assets, 3D CAD models, et cetera, into Immersed. But because you have your screens, and teams are figuring out how to collaborate on 2D screens, for the time being we’re saying just continue to use your CAD software on your 2D computer screen. In the future we’ll have that capability. We also don’t want to be a full 3D modelling VR software, so we’re trying to find that balance, which is why it’s been de-prioritized. But it is coming, hopefully in 2022. And then, we have also explored having video files in the form of screens, or image files, or Post-it notes. We’re also going to improve our whiteboard experience, which is just one of our first iterations. And so, there are a lot of improvements we’re going to be making in the future, in addition to different assets: photos, videos, 3D modelling software, et cetera. We’ve had that request multiple times and plan on building it in the future.
Fabien Benetou: Oh, and super quick. It means you get in, you do the work, you get out, but you don’t have something like a trace of it as is right now?
Gavin Menichini: As in persistence? As in you get in, you leave your screens there?
Fabien Benetou: Or even something you can extract out of it. Frode was saying that, for example, he gets an email about the time he spent in a session, but is there something else? Again, because usually — maybe not a eureka moment, but you have some kind of realization in the space, thanks to the space and the tools. And how to get that out is really a struggle.
Gavin Menichini: I’m not sure, I’m sorry. I’m not sure I’m understanding your question correctly, but well, so it’s…
Brandel Zachernuk: Maybe I can take a run at it. So, when people play VR games at a VR arcade, one of the things that people will often produce is a sizzle reel of moments in that action. There’s a replay recording, an artifact of the experience, of that process.
Gavin Menichini: Okay, yes. So, for the time being, there is no functionality in Immersed for that. But Oculus gives you the ability to record what you’re watching in VR. And you can pull that out and take that experience with you, as well as take snapshots. And we have no plans to incorporate that functionality into Immersed, because Oculus has it, and I think HTC does, and other hardware manufacturers will provide that recording experience for you to take away with you.
Frode Hegland: Thank you very much, Gavin, a very interesting, real-world perspective on a very specific issue. So, very grateful. We’ll stay in touch. Run to your next meeting. When this journal issue is out, I’ll send you an update.
Gavin Menichini: Thank you, Frode. It was a pleasure getting to chat with each of you. God bless. Hope you guys have a great Friday, weekend, and we’ll stay connected.
Frode Hegland: You too. Take care, bye. 
Gavin Menichini: Thanks, y’all. 
Brandel Zachernuk: I’m going to drop at some point, as well. I’ve been missing the Near Future Laboratory chats on Fridays by joining the second hour of this. So, I want to make sure that I keep my hand in that community as well, because they’re very interesting people too.

Further Discussion

Frode Hegland: Oh, okay. That sounds interesting. Yeah, we can look at changing times and stuff. So, briefly on this, and then on the meeting that I had with Elliot earlier today. This is interesting to us, because they are thinking a lot less about VR than we are. But it is a real and commercial company, and obviously a lot of his words were very salesy, which is fine. But it literally is a rectangle in the room. That’s it. So, in many ways, it’s really, phenomenally useful. And I’m very glad they’re doing it. I’m glad we have a bit of a connection to them now. But the whole issue of taking something out of the screen and putting it somewhere else — it was partly using their system that made me realize that’s not possible. And that’s actually kind of a big deal. So that’s that. And the meeting that Elliot and I had today — he mentioned who it was with, and I didn’t want to put too much into the record on that. But it was really interesting. The meeting was because of Visual-Meta. Elliot introduced us to these people. And Vint — Vint couldn’t be there today. We started a discussion. They have all kinds of issues with Visual-Meta: they love the idea, but then there are implementation issues, blah, blah, blah. But towards the end, when I started talking about the Metaverse thing, they had no idea about the problems that we have learned about. And they were really invigorated and stressed by it. So, I think what we’re doing here, in this community, is right on. I’m going to try now to rewrite some of the earlier stuff, to write a little piece over the weekend on academic documents in the Metaverse, to highlight the issues. And if you guys want to contribute some issues to that document, that would be great — or not, depending on how you feel. But I think they really understood it. What I said to them at the end is: if you have a physical meeting and a piece of paper, you can do whatever you want with it.
But in the Metaverse, you can only do with the document whatever the room allows you to, which is mind-blowingly crazy. And they represent a lot of really big publishers within medicine. They are under the National Institutes of Health, as I understand it. I’m not sure if Elliot is still in the room. So, yeah. It is good that we are looking in the right areas.
Brandel Zachernuk: Yeah, that’s really constructive. For my part, one of the things that I’ve realized is that the hypertext people — the people who understand the value of things like structured writing, and relationship linking, and things like that — are far better positioned than many, possibly most, to understand some of the questions and issues that are intrinsic to the idea of a Metaverse. So, I linked a podcast to some folks — I think it’s called Into The Metaverse — and it was a conversation between a VP of Unreal and the principal programmer, architect, whatever, of Unity, and Vladimir Vukićević — I don’t know if I’m garbling that name — who was the inventor of WebGL. Which is the foundation for all of the stuff that we do in virtual reality on the web, as well as just being very good for doing fancy graphics, as I do at work, and things like that. But their view of what goes into a Metaverse — what needs to be known about entities, relationships, descriptions, and things — was just incredibly naive. I’ll link the videos, but they see the idea of a browser as being intrinsic. And another person, who’s a 25-year veteran of Pixar and the inventor of the Universal Scene Description format, USD — which, as you may know, Apple is interested in, sort of, promoting as the format of choice for augmented reality, quick look files, things like that — again, just incredible naivete in terms of what the important things are to be able to describe with regard to relationships, and constraints, and linkages of the kind that hypertext has. It’s the bread and butter of hypertext to understand how to make things relevant, notionally and structurally, in a way that means that it’s (indistinct). So, yeah. It’s exciting, but it’s also distressing to see how much the thinking of people who are really titans of the interactive graphics field misses what this medium is.
So, that looks fun.
Frode Hegland: Yeah, it’s scary and fun. But I think we’re very lucky to have Bob here, because I’ve been very much about the document and so on, and for Bob to say, “Well, actually, let’s use the wall as well”, helps us think about going between spaces. And what I highlighted in the meeting earlier today was: what if I take one document from one repository — and let’s say it has all the meta, so I’ve put a little bit here, a little bit there — but then I have another document, from a different repository over here, and I draw a connection between them? That connection now is a piece of information too. Where is it stored? Who owns it? And how do I interact with it in the future? These are things that have not even begun to be addressed, because, I think, all the companies doing the big stuff just want everything to go through their stuff.
Bob Horn: And what kind is it? That is the connection.
Frode Hegland: Yeah, exactly. So, these are early, naive days, and we need to produce some interesting, worthwhile questions here. Fabien, I see your big yellow hand.
Fabien Benetou: I’ll put the less yellow hand on the side. Earlier, when I said I don’t know what I’m doing, it wasn’t false modesty, or trying to undermine my work, or that kind of thing. I actually mean it. I do a bunch of stuff, and some of the stuff I do, I hope, is interesting — I hope it’s even new — and might lead to other things. But it’s not purely random, and there are some, let’s say, not heuristics, but some design principles, a philosophy behind it, an understanding of some, hopefully, core principles of neurology, or cognitive science, or just engineering. But in practice, I think we have to be humble enough about this being a new medium. Figuring it out is not trivial, it’s not easy. Part of it is intelligence and knowledge, but a lot of it is all that, plus luck, plus attempting things.
Frode Hegland: Oh, I agree with you. And I see that in this group. The reason I said it was that I just wanted him to have a clue of the level of who is in the room. That’s all. I think our ignorance in this room is great. I saw this graphic when I started studying — I haven’t been able to find the source — but it showed that if you know this much about a subject, the circumference, which is the ignorance, is small. The more you know, the bigger the circumference is. And I found that to be such a graphic illustration of knowing what you don’t know. We need to go all over the place. But at least we’re beginning to see some of the questions. And I think that’s a real contribution of what we’re doing here. So, we’ve just got to keep on going. Also, as you know, we now have two presenters a month, which means, for the next two or three months — I’ve only signed up one. Brandel is going to be doing something, hopefully, in two to three weeks, right?
Brandel Zachernuk: Yeah. I’m still chipping away. Then I realized that there’s some reading I need to do, in order to make sure that I’m not mischaracterizing Descartes.
Frode Hegland: Okay, that sounds like fun. Fabien, would you honour us, as well, with doing a hosted presentation over the next month or two or something?
Fabien Benetou: Yeah, with pleasure.
Frode Hegland: Fantastic! Our pathetic little journal is growing slightly less pathetic by the month.
Fabien Benetou: I can give a teaser on… I don’t have a title yet, but let’s say, how a librarian, what a librarian would do if they were able to move walls around.
Frode Hegland: That’s very interesting. The one we had on Monday, with Jad, was good. It was completely different from what we’re looking at — it was looking at identity. And for you to now talk about that aspect, a kind of spatial aspect, is very interesting.
Bob Horn: I’m looking forward to whatever you write about this weekend, Frode. Because, for me, the summaries of our discussions, with some organization — not anywhere near perfect organization, I’m not asking for that, but some organization, some patterns — are what’s important to me. And when I find really good bunches of those, then I can visualize them. So, I’m still looking for some sort of expression of the levels of where the problems are, as we see them now. In other words, what I heard today, with Immersed, was a set of problems at a certain level, to some degree. And then, a little bit on the organization of knowledge — not a lot, but that’s what came up in our discussion afterwards and so forth. So, whenever there’s that kind of summary, I really appreciate whatever you do in that regard, because I know it’s the hardest work at this stage. So I’m trying to say something encouraging, I guess.
Frode Hegland: Yeah, thank you, Bob. That’s very nice. I just put a link on this document that I wrote today. The next thing will be, as we discussed. But information has to be somewhere. It’s such an obvious thing, but it doesn’t seem to be acknowledged. Because in a virtual environment, we all know that when you watch a Pixar animation, they’ve made every single pixel on the screen. There is no sky even. We know that. But when it becomes interactive, and we move things in and out. Oh, Brandel had a thing there.
Brandel Zachernuk: One of the things that Guido Quaroni talks about, as well as people have talked a bunch about, is the influences and contributions of Íñigo Quílez. Quílez makes Shadertoy, I don’t know if you’ve ever seen or heard of that. But it’s this raymarching-based fragment shader system for being able to do procedural systems. And so, none of the moss in Brave, if you’ve seen that film, exists. Nobody modeled it. Nobody decided which pieces should go where. What they did was, Quílez has this amazing mind for a completely novel form of representation of data. It’s called the Signed Distance Field raymarched shader. And so it’s all procedural. And all people had to do was navigate through this implicit virtual space to find the pieces that they wanted to stitch into the films. And so, it never existed. It’s something that was conjured on a procedural basis and then people navigated through it. So yes, things have to exist. But sometimes that’s not because people make them. And sometimes it’s because people make a latent space, and then they navigate it. And I think that the contrast between those two things is fascinating, in terms of what that means for what creative tools oblige us to be able to do. Anyway.
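[Editor’s note: for readers unfamiliar with the technique Brandel describes, here is a minimal sketch of signed-distance-field raymarching (“sphere tracing”). It is an illustrative toy in plain Python, not Quílez’s shader code: a scene is defined implicitly as a distance function, and a ray steps forward by exactly the distance the function reports until it lands on the surface.]

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a sphere: negative inside, zero on the surface."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def raymarch(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    """March along a ray, stepping by the SDF value each time.
    Returns the distance travelled to the surface, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + t * direction[i] for i in range(3))
        d = sdf(p)
        if d < eps:      # close enough: we hit the implicit surface
            return t
        t += d           # the SDF guarantees this step cannot overshoot
        if t > max_dist:
            return None
    return None

# A ray from the origin straight down +z hits the unit sphere centred at z=5 at t = 4.
hit = raymarch((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Nothing in this scene is modelled: the sphere exists only as a formula, which is the sense in which the Brave moss “never existed” until someone navigated to it.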
Frode Hegland: Oh, yeah. Absolutely. Like No Man’s Sky and lots of interesting software out there. But it’s still not in the world, so to speak. One thing I still really want, and I’m going to pressure you guys every time, no, it’s not to write your bio, but it is some mechanism where, as an example, our journal, I can put it in a thing so that you guys can put it in your thing. Because then we can really start having real stuff that is our stuff. So if you can keep that in the back of your mind. Even if you can just spec how it should work, I’ll try to find someone to do it, if it’s kind of rote work and not a big framework for you guys.
Brandel Zachernuk: Yeah, I definitely intend to play more with actually representing text again. And somebody made a sort of invitation slash prompt slash challenge to get my text renderings to be better. Which means that I’ll need something to do it better on. And so, yeah. I think that would be a really interesting target goal.
Frode Hegland: Awesome. Fabien, I see you have your hand, but on that same request to you guys, imagine we already have some web pages where you can click at the bottom, view in VR, when you’re in the environment. That’s nice. Imagine if we have documents like that, that’ll be amazing. And I don’t know what that would mean, yet. There are some thoughts, but it goes towards the earlier. Okay, yes. Fabien, please?
Fabien Benetou: Yeah, I think we need to go a bit beyond imagining. Then we can have some sandbox, some prototypes of the documents. We have recorded, that’s how I started, the first time I joined, you mentioned Visual-Meta. And then, I put a PDF and some of the metadata in there. No matter what form the outcome was going to take, I definitely think that’s one of the most interesting ways to do it. A quick word on writing: my personal fear about writing is, I don’t know if you know the concept, and I have the name of the person on the tip of my tongue, but, idea debt. So the idea is that you have too many ideas, and then at some point, if you don’t realize some of them, if you don’t build, implement, make them happen, whatever the form, it’s just crushing. And then, let’s say, if I start to write, or prepare for the presentation I mentioned just 30 minutes or 10 minutes ago, the excitement and the problem is, for sure, that summarizing it, stepping back, is going to bring new ideas. Like, “Oh, now I need to implement. Now I need to test it”. There is validation in it. I’m not complaining or anything. Just showing a bit of my perspective, of my fear of writing. And also because in the past, at some point I did just write. I did not code anything. It felt good in a way. But then also, a lot of it was, I don’t want to say bullshit but, maybe not as interesting, so I’m just personally trying to find the right balance between summarizing, sharing, having a way that the content can be reused, regardless of the implementation, any implementation. Just sharing my perspective there.
Frode Hegland: That is a very important perspective. And it is very important to share. And I think we’re all very different in this. And for this particular community, my job as, quote-unquote, editor, is to try to create an environment where we’re comfortable with different levels. Like Adam, he will not write. Fine. I steal from Twitter, put it in the journal, and he approves it. Hopefully. Well, so far he has. So, if you want to write, write. But also, I really share, so strongly, the mental thing you talked about. We can’t know what it’s like to have something until it exists. And we say, if an idea is important, write it down, because writing it down, of course, helps clarify it. But that’s only if it’s that kind of an idea. Implementing, in demos and code, is as important. I’ve been lucky enough to be involved with building our summer house, in Norway, and doing a renovation here. And because it’s a physical environment, even doing it in SketchUp is not enough. I made many mistakes. Thankfully, there were experienced people who could help me see it in the real thing. Sometimes we had to put boards up in a room to see what it would feel like. So, yeah. Our imaginations are hugely constrained. So, it’s now 19 past. And Brandel was suggesting he had to go somewhere else. I think it’s okay, with a small group, if we finish half-past, considering this will be transcribed, anyway. And so, let’s have a good weekend. Unless someone wants a further topic discussion, which I’m totally happy with also.
Brandel Zachernuk: Yeah. I’m looking forward to chatting on Monday. And I will read through what you sent to the group that you discussed things with today. Connecting to people with problems that are more than graphical, and more than attendant to the Metaverse, I think is really fascinating. Providing they have the imagination to be able to see that what they are talking about is a “Docuverse”. It’s these sorts of connected concepts that Bob has written about. I’ve got a book but it’s on the coffee table. The pages after 244. The characterization of the actual information and decision spaces that you have. It’s got the person with the HMD, but then it’s sort of situated in an organization where there are flows of decisions. And I think that recognizing that we can do work on that is fascinating.
Bob Horn: I can send that to everybody, if you like.
Frode Hegland: Oh, I have it. So, without naming names or exactly who I was speaking to today, since we’re still recording. The interesting thing is, of course, starting with Visual-Meta, this feeds into some part of the organization that desperately wants something like that, and they’ve been pushing for years. But there are resources, and organization, and communication, all those real-world issues. So then, a huge problem is, I come in as an outsider and I say, “Hey, here’s a solution. It’s really cheap and simple”. It’s kind of like I’m stealing their thunder, right? I am not doing that, I’m just trying to help them realize what they already want to do. And today, when they talked about different standards, I said, “Look. Honestly, what’s in Visual-Meta, I don’t care. If you could, please, put it in BibTeX, the basic stuff, but if you want to have some JSON in there, it’s not something I would like, but if you want to do it there’s nothing wrong with that”. So, to try to make these people feel that they are being enabled, rather than someone kind of moving them along, is emotionally, humanly difficult. And also, for them to feel that they’re doing something with Vint Cerf. All of that, hopefully, will help them feel a bit of excitement. But I also think that the incredibly hard issues with the Metaverse that we’re bringing up also unlock something in their imagination. Because, imagine if we, at the end of this year, have a demo, where we have a printed document, and then we pretend to do OCR, we don’t need to do it live, right? And then, we have it on the computer, very nice. And now, suddenly, we put on a headset. You all know where I’m going with this, right? We have that thing. But then, as the crucial question you kept asking Gavin, and I’m glad you both asked it, Fabien and Brandel, what happens to the room when you leave it? What happens to the artifacts and the relationships if we solve some of that?
What an incredibly strong demo that would be. And also, was it a little bit of a wake-up call for you guys to see that this well-funded new company is still dealing with only rectangles?
Brandel Zachernuk: No. I know from my own internal experience just how coarse the thinking is, even with better funding.
Frode Hegland: Yeah. And the greatest thing about our group is, we have zero funding. And we have zero bosses. All we have is our honesty, community, and passion. Now, it’s a very different place to invent from. But look at all the great inventions. Vint was a graduate student, Tim Berners-Lee was trying to do something in a different lab. You know all the stories. Great innovations have to come from groups like this. I don’t know if we’re going to invent something. I don’t know. I don’t really care. But I really do care, desperately, that we contribute to the dialogue.
Brandel Zachernuk: Yeah, I think that’s valuable. I think that the fact that we have your perspective on visual forms of important, distilled information is going to be really valuable. And one of the things I’d like to do, given that you said that so many people make use of Vision 2050, is to start with that as a sculpture, as a system to be able to jump into further detail. Do you have more on that one?
Bob Horn: Well, I can take it apart. I can do whatever different things we want to do with it. For example, when we were clearing it with the team that created some of the thought that went into it, the backcast thought, I would send the long trail of the four decades of transportation to Boeing, to Volkswagen, and to Toyota. I didn’t send it to the rest of the people. So, I could take that, I actually took that out and sent a PDF of that, only that, to them. And that’s one dimension. Another dimension is that five years later, I worked on another project that was similar, called POLFREE. Which is also on my website. And it narrowed the focus to Europe, to the European Union, rather than the whole world. But the structure is similar in many ways. So each one of those is extractable. Then also, I have a few… In the two or three years after working on Vision 2050, I would give lectures of different kinds. And people would ask me, “Well, how are we doing on this or that requirement?” And so, I would try to pull up whatever data there was, two, or three, or four years later, and put that in my slides, so that material is available. So, we can extract, you could demo, at least, “Here’s what we thought in 2010 and here’s what it looked like in 2014”. For one small chunk of the whole picture. So, yeah. And I have several, maybe, I don’t know, six or eight at least of those, where I could find data easily and fast. So, there’s a bit of demo material there that could portray a different kind of a landscape than the one that you pointed out just a minute ago.
Brandel Zachernuk: Yeah. That would be really interesting to play with. I was just looking at some of the things. I think that the one thing that I had seen of Vision 2050 was the fairly simple one, this node graph here; the “nine billion people live well and within the limits of the planet” one I hadn’t seen yet. The pathway toward a sustainable 2050 document that you linked here on your site has a ton more information. And, yeah. One of the things that I’m curious about, one of the things that I think I will do to play with it first, is actually get it into, not into a program that I write, but into a 3D modelling app, to tear it apart, and think about the way in which we might be able to create and distribute space for it. But first, do you have thoughts about what you would do if this was an entire room? It obviously needs to be a pretty big mural, but if it was an entire room, or an entire building, do you have a sense of the way in which it would differ?
Bob Horn: Until you asked the question, and put it together with the pages from the old book, I hadn’t really thought of that. But from many of the places in Vision 2050 one would have pathways like this. This was originally a PERT chart way back when, that I was visualizing, because I happened to have, early in my career, edited a book on PERT charts for DuPont. And so, that’s a really intriguing question. Extracting it and laying it out, and then connecting those, and also flipping the big mural, the time-based mural in Vision 2050, making that flat, bringing different parts of it up, I think would be one of the first ways that one would try to explore that, because then, one could (indistinct) pathways, and alternatives, and then linkages. So, they’re different. Depending on one’s purpose, thinking purpose, one would do different things.
Fabien Benetou: Brief note here. I believe, using Illustrator to make the visuals, I believe Illustrator can also save to SVG. And SVG can then be relatively easily extruded to transform a 2D shape into a 3D shape. Honestly, doing that would probably be interesting but very basic, or very naive. It’s still, I think, a good step to extrude parts of the graph with different depths based on, I don’t know, colour, or meaning, or position, or something like this. So, I think it could be done. But, if you could export one of the posters in that format, in SVG, I think it would be fun to tinker with. But I think, at some point, you personally will have to consider, indeed, the question that Brandel asked. If you have a room, rather than a wall, beyond the automatic extraction or extrusion, how would you design it?
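[Editor’s note: in practice Fabien’s suggestion would likely be done with a WebXR library (for example three.js has an SVG loader and extrusion geometry), but the core idea of extruding a 2D outline into a 3D shape is simple enough to sketch in plain Python. This is an illustrative toy, not production code: it takes a closed polygon, duplicates it at two depths, and generates the side walls.]

```python
def extrude_polygon(points_2d, depth):
    """Extrude a closed 2D polygon (a list of (x, y) tuples) into a 3D prism.
    Returns (vertices, side_quads): the outline copied at z=0 and z=depth,
    plus one quad of vertex indices per outline edge for the side walls."""
    n = len(points_2d)
    vertices = [(x, y, 0.0) for x, y in points_2d] + \
               [(x, y, float(depth)) for x, y in points_2d]
    side_quads = []
    for i in range(n):
        j = (i + 1) % n  # wrap around to close the outline
        # bottom edge (i -> j) joined to the matching top edge (j+n -> i+n)
        side_quads.append((i, j, j + n, i + n))
    return vertices, side_quads

# A unit square extruded to depth 2: 8 vertices, 4 side walls.
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
verts, quads = extrude_polygon(square, 2.0)
```

Varying `depth` per shape, as Fabien suggests, by colour or meaning, would simply mean calling this with a different depth for each extracted SVG path.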
Brandel Zachernuk: Yeah. It’s something that I think would be really useful as an exercise, if you want to go through one of those murals with a sketchbook, just pencils. And at some point, you can go through it with us to characterize what, like you said, different shapes, different jobs call for: different shapes through that space. But one can move space around, which is exciting. Librarians can move their walls around.
Bob Horn: I was going to say, to strike another chord, just from the demonstration we saw earlier this morning. The big mural could be on one wall. There was a written report. There is a 60 or 80-page report that could be linked in various ways to it. And it exists. And then, there’s also, in that report, a simplification of the big mural. It reduces the 800 steps in the mural to about 40. And it’s a visual table look. So, already there are three views, three walls, and we’ve already imagined putting it flat on the floor and things popping up from it. All right, there we go. There’s a room for you.
Brandel Zachernuk: Exciting, yeah. I think that’s a really good start. And from my perspective, something that I can and will play with is, starting from that JPEG of the PDF, I’ll peel pieces of that off and try to arrange them in space, thinking about some of the stuff that Fabien’s done with the Visual-Meta, virtual Visual-Meta. As well as what Adam succeeded in doing, in terms of pulling the dates off, because I think that there’s some really interesting duality of views, like multiplicity of representations, that we can kind of get into, as well as being able to leverage the idea of having vastly different scales. When you have a, at Apple we call it a type matrix, but just the texts and what’s a heading, what’s a subhead. But the thing is that, except in the most egregious cases, which we sometimes do at Apple, the biggest text is no more than about five times the smallest text. But in real space you can have a museum, and the letters on the museum wall or in a big room are this big. And then you have little blocks like that thing. And there’s no expectation for them to be mutually intelligible. There’s no way you can read this, while you’re reading that. But because of the fact that we have the ability to navigate that space, we can make use of those incredibly disparate scales. And I think it’s incumbent on us to reimagine what we would do with those vastly different scales that we have available, as a result of being able to locomote through a virtual space.
Bob Horn: Well, let me know if you need any of these things. I can provide, somehow. I guess you and I could figure out how to do a dropbox for Illustrator or any other thing that can be useful for you.
Brandel Zachernuk: Yeah, thank you. I may ask for the Illustrator document. One of the things that I’ve been recently inspired by, so there’s an incredible team at Apple that I’m trying to apply for, called prototyping. And one of the neat things that they have done over the years is describe their prototyping process. And it mostly involves cutting JPEGs apart and throwing them into the roughest thing possible, in order to be able to answer the coarsest questions possible first. And so, I’m very much looking forward to doing something coarse-grained, with the expectation that we get a better sense of what it is we would want to do with more high-fidelity resources. So, hopefully that will bear fruit, and nobody should be, hopefully not, too distraught by misuse of the material. But I very much enjoy the idea of taking a fairly rough hand to these broad questions at first, and then making sure that refinement is based on actual resolution, in the sense of being resolved, rather than pixel density.
Bob Horn: Yeah, well, okay. If you want JPEGs we can make JPEGs too.
Frode Hegland: You said it almost as a throwaway thing there. Traverse. But one thing that I learned, Brandel, particularly with your first mural of Bob’s work, is that traversal, unless you’re physically walking, if you have a room-scale opportunity, is horrible. But being able to pull and push is wonderful. And I think that kind of insight, that we’re learning by doing, is something we really should try to record. So, I’m not trying to push you into an article. But if you have a few bullets that you want to put on Twitter, or send to me, or whatever, as in: this, in your experience, has caused stomach pain, this hasn’t. Because also, yesterday, I saw a… You know I come from a visual background, and have photography friends, and do videos, and all that stuff. Suddenly, a friend of mine, Keith, whom some of you have met, we were in SoHo, where he put up an 8K 360 camera, and it was really fun. So, I got all excited, went home, looked up a few things, and then I found the stereo 180 cameras. And I finally found a way to view it on the Oculus. It was a bit clunky, but I did. It was an awful experience. There’s something about where you place your eye. When we saw the movie Avatar, it was really weird that the bit that is blurry would actually be sharp as well, but somewhere else. Those kinds of effects. So, with a stereoscopic view, if it isn’t exactly right on both eyes and you’re looking at the exact spot, it’s horrible. So, these are the things we’re learning. And if we could put them into a more listy way, that would be great. Anyway, just since you mentioned it.
Brandel Zachernuk: Yes. It’s fascinating. And that’s something that Mark Anderson also observed when he realized that, unfortunately, the Fresnel lenses that we make use of in current generation hardware mean that it’s not particularly amenable to looking with your eyes like that. You really have to be looking through the center of your headset in order to be able to get the best view. You have this sense of the periphery. But it will tire anybody who tries to read stuff down there, because their eyes are going to start hurting.
Frode Hegland: Yeah. I still have problems getting a real good sharp focus. Jiggle this, jiggle that. But, hey! Early days, right? So when it comes to what we’re talking about with Bob’s mural, and the levels, and the connections, and all of that good stuff, it seems to be an incredibly useful thing to experiment with exactly these issues. What does it actually mean to explode it, et cetera? So, yeah. Very good. 
Fabien Benetou: Yeah. I imagine that has been shared before. But just in case, Mike Alger, who is, or at least who was, I’m not sure right now, a designer at Google, on the UX side, wrote some design principles a couple of years ago. And not all of them were his, but he illustrated them quite nicely. So, I think it’s a good summary.
Brandel Zachernuk: Yes, I agree. He’s still at Google; he was working on Earth and YouTube. Working on how to present media, and make sure that it works seamlessly, so that you’re not lying about what the media is, but presenting a YouTube video in VR in a way that isn’t just a flat screen slapped into the scene or whatever. But also, making sure that it’s something you can interact with as seamlessly as possible. So, it’s nice work, and hopefully, if Google ramps its work back up into AR, VR, they can leverage his abilities. Because they’ve lost a lot of people who were doing really interesting things. I don’t know if you saw, Don McCurdy has now moved to The New York Times to work on 3D stuff there. And that’s very exciting for them. But a huge blow for Google not to have him.
Frode Hegland: Just adding this to our little news thing. Right. Excellent. Yeah. Let’s reconvene on Monday. This is good. And, yeah. That’s all just wonderful. Have a good weekend.