Transcript: 3 Jan 2022

Video: https://youtu.be/plN_KHTZwcY

Chat log: https://futuretextlab.info/2022/01/11/3rd-of-jan-2022-chat-log/

Rafael Nepô: [00:02:48] You guys enter into a virtual reality phase.

Frode Hegland: [00:02:53] Well, you say it’s a phase; I don’t think it is a phase. And you know, you talked about costs and things. Don’t forget that, kind of strangely, people who don’t have that much money are the ones who tend to buy the biggest TVs. That’s because they like entertainment, and there’s nothing wrong with that; it’s an effective way to get entertainment, right? Yeah. I think it’ll be something similar with VR, that a lot of people will jump in. And I see now there is a bit of a tipping point, but there’s still the issue we’ve been discussing: whether there will be proper productivity stuff.

Alan Laidlaw: [00:03:32] Mm hmm.

Rafael Nepô: [00:03:33] I just want to clear up that I’m a complete fan of VR. I love it. It’s incredible. And I mean, it’s something that we’ve been doing since the dawn of text, if you think about it, right? Because storytelling is virtual reality, in a way, where you’re listening to somebody telling a story and you go into another dimension, another world. So it’s just another interface for seeing things. [00:04:00]

Frode Hegland: [00:04:01] Yeah, I mean, we’re waiting for Brandel and Alan, you know, to dive into the community planning thing. I’m glad you agree. But the thing that surprised me is a couple of things. Number one is this thing about having 3D on a computer screen. That’s one thing, but if you can move your head, it becomes something really, strangely different. Anyway, I’ll report back; when you guys get your Oculus, I’ll meet you in that land. But anyway, OK, well, we’re waiting for the rest of the guys. Let’s have a look at this.

Mark Anderson: [00:04:39] I think the other useful thing that came up was — we were discussing the fact that having something like the Oculus was less about a 3D experience, more potentially a sort of n-dimensional experience, in that it’s a bit naive just to think of it as, ooh, you know, this is the [00:05:00] next revolution of sort of blocky 1980s wireframe diagrams. More that, for the sort of things we’re talking about, it’s the ability to, say, take some information and look at it on a whole number of different axes. And now you’ve got the performance of these things to work with. I mean, that I find much more interesting than just being able to play a game with the headset on, which is, I think, where a lot of people’s heads are at at the moment. And that’s absolutely fine. But I don’t think that’s the interesting part of it at all.

Rafael Nepô: [00:05:28] I love the idea of VR glasses as entering a mind palace where, you know, I keep my things. So it’s a literal entryway to a mind palace. So it’s very nice.

Frode Hegland: [00:05:43] Yeah, I agree with you. That is a very nice way of looking at it. By the way, there are some things with this setup that are really amazing and some that are really awful. Like, I’m trying to log into Google — excuse me, YouTube, Google, same thing in a way. And you know, I have the headset [00:06:00] on and I have to enter a code here and go to another part of the app. Obviously, it’s something you’d only do once, but still, that’s pretty bad, right? So, on the agenda: I met with Ward Cunningham yesterday. Everybody knows him. He’s the inventor of the wiki, right?

Mark Anderson: [00:06:16] Mm hmm.

Frode Hegland: [00:06:17] Wonderful chat. I showed him — this was not really to do with my PhD, because my thesis has officially been handed in, even though they don’t have the documents, so it was not recorded — but he really likes the idea of Visual-Meta. He really thinks it’s cool that we accept that documents still exist, because he doesn’t in his work. But intellectually, he likes that. But more interestingly, the Map in Author — you click on something, you see the lines — he really likes, because it isn’t messy. So that was actually his main thing. And he would very much like to join us for a hosted session. So what I think we should do, considering the potential group of people [00:07:00] we have, is do one every two weeks, because if we only plan until June — six months — that’s actually quite a lot of time. And then during that we can get enough people for the next year, because otherwise one year will be filled up really quickly. What do you think, guys?

Rafael Nepô: [00:07:18] So until June, that would be eight people. We can get eight people for six months; that’s completely fine, I think. I think it’s more a matter of logistics and getting onto people’s agendas. So basically, if we take January and we fill all the slots until June, we’re fine.

Frode Hegland: [00:07:40] Yes, Barbara will join us January 21st. Mm hmm. So I think we should put Ward in — I don’t know where. But anyway, they have said yes. Also Richard Saul Wurman, who founded TED. Barbara is already up there consistently [00:08:00] because she has moved across. And then we have these other suggestions, like Anne-Laure; I’m going to invite her soon. I think we all agree on her, right?

Rafael Nepô: [00:08:11] Yeah, she’s been doing a great, great amount of work towards text productivity and, you know, the knowledge management area. It’s good.

Frode Hegland: [00:08:23] Oh, so that’s why that.

Rafael Nepô: [00:08:34] I would love to get Alberto Manguel on board. His book has been incredible, and it’s a hundred percent about text, and he’s very, very thorough. And he worked with Jorge Luis Borges — you know, of the infinite library.

Frode Hegland: [00:08:53] That’s amazing.

Rafael Nepô: [00:08:55] He’s, you know, a living person that had contact [00:09:00] with, you know, some amazing people, and he would be a very valuable asset. I’ll try to find some contact information for him.

Frode Hegland: [00:09:13] Special effect — Brandel arrived. Excellent. Yeah, sorry. And, Rafael, if you get him, that would be amazing. I obviously have his book and have read it too. So that would be really cool. Yeah. Brandel, you have to send me a friend request, or whatever it’s called on there.

Brandel Zachernuk: [00:09:36] Yes, I will definitely do that. I think that would be cool.

Frode Hegland: [00:09:41] So now we just need everyone else to feel excluded. So they’ll also join us. We’ll do the psychological thing of pretending we’re better than them. I’m sure it’s not working, is it? No.

Rafael Nepô: [00:09:52] All right. Not working.

Frode Hegland: [00:09:55] So, Brandel, I was just saying I spoke with Ward Cunningham yesterday — had [00:10:00] a really lovely chat — and he would like to join us for one of the hosted things. We also have Richard Saul Wurman, who founded TED. So I think we just agreed as a group that we’re going to try to do a hosted meeting every two weeks, because that means, by June, you know, 12 people. It’s not that bad. You know, we could easily do that. But then we need to decide in this meeting what our journal will be — what medium and so on will it be? Blog, PDF, or whatever? We just have to put a stick in the ground and make a decision; we can always change it. But we’ve got to do something.

Mark Anderson: [00:10:44] The PDF really strikes me as an output format; in other words, it’s best used for something that has broadly become static, or has reached, you know, an edition — a point at which you want to crystallize it. And [00:11:00] in that way, it sort of reflects a boundary between the old-fashioned thing, where quite literally you had to put it on a piece of paper to show somebody, as opposed to the brave new world we’re moving to, where it arguably never has to exist in documentary form, which I guess is where Ward is coming from. You know, as long as it’s always connected in — and that’s the assumption we’re sort of making.

Frode Hegland: [00:11:20] Well, well, I think he thinks that PDFs are important in the context of how we are doing it. But the thing is, let’s talk about it from an almost child perspective. What is a journal? A journal is a published work, therefore exported, therefore frozen. But that should not preclude a very interactive experience with a journal entry or the whole corpus of journals, right?

Brandel Zachernuk: [00:11:51] Yeah. I mean, there are a number of different components to hypertext, in that there’s cross-linkage [00:12:00] and there’s web delivery, and the web delivery aspect of hypertext is the aspect that is problematic or challenging for sort of durability. It would be possible to have a local archive, or to be able to save a zip of a set of pages and give that to people. In terms of the way that they would be able to interact with each other, it might be somewhat odd; if you wanted to give people something, you can host it as well, but also just make sure that it’s always downloadable. That sort of serves a number of purposes simultaneously. I don’t know about Solid. I mean, I haven’t spoken to him before, but I don’t exactly understand how all of this is supposed to be helpful, to be honest. But it might be a mechanism that is worth considering, if only spiritually. So if [00:13:00] anybody is familiar with what it is and what it’s supposed to do for the concept of durability and hosting, I’d be very interested.

Frode Hegland: [00:13:06] One of our colleagues — Mark’s and mine — his name is Henry Story. He did Solid. And along with Chris Gutteridge, who is probably the most clever person I’ve ever met, with all due respect to everyone here — we don’t get it either. We really like the basic ideas, but it seems when you get to what it actually is, there’s a lot of... I’m doing this, by the way, just to show you: in Author you can export to WordPress. It currently doesn’t work very well because some APIs changed, but it wouldn’t be that hard for me to get that working again. But — because I’m, you know, of an artistic personality trait, to put it that way — let’s say that one thing we would like is to have this be relatively easily interactable in VR space. Just for [00:14:00] that aspect, because we can obviously output to many different formats. So, Brandel, what would you like it to be if you were going to be interacting with it in your software?

Brandel Zachernuk: [00:14:12] Uh, this is where I clash a little bit with the rest of the WebVR community, but I think that it should be HTML, in order to make sure that it works well in virtual reality. The one caveat is that in order for me to be able to do my work of rendering something from HTML, it’s necessary to have it in the right sort of tag structure. That means that I can extract what needs to be seen and where. So beyond having in mind an expectation that it’s well-structured enough to do that, and thinking about the scale and the size of the pieces within it, I don’t believe that it’s worth trying to optimize for anything more than that. So, you know, most people in WebVR at this point aren’t [00:15:00] using documents; they’re using 3D models. I think text should be text. I think it should be represented as such, so that it has that sort of transposability and changeability. And to that end, I have built things that support being able to render a document, or being able to take a snapshot of a document at a moment and turn that into text. That’s what’s in reality. That’s what’s in deep regret. So, yeah.
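What Brandel describes — a renderer that walks well-structured HTML, extracting what needs to be seen and where — can be sketched roughly as below. This is a hypothetical illustration, not his actual code; the tag list and the scale values are invented for the example.

```javascript
// Pure helper: map a structural tag to a rendering scale for a VR scene.
// The specific scale values are illustrative assumptions.
function layoutForTag(tag) {
  const scales = { h1: 2.0, h2: 1.5, h3: 1.2, p: 1.0 };
  return tag in scales ? scales[tag] : 1.0;
}

// Walk a parsed document and flatten its structural elements into blocks
// that a 3D text renderer could then position and size.
function extractBlocks(doc) {
  return Array.from(doc.querySelectorAll('h1, h2, h3, p')).map((el) => ({
    tag: el.tagName.toLowerCase(),
    text: el.textContent.trim(),
    scale: layoutForTag(el.tagName.toLowerCase()),
  }));
}

// In a browser:
//   const doc = new DOMParser()
//     .parseFromString('<h1>Title</h1><p>Body text.</p>', 'text/html');
//   extractBlocks(doc);
```

The point of the sketch is the constraint Brandel states: as long as the journal is well-structured HTML, a renderer only needs the tag structure, not any particular presentation.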

Frode Hegland: [00:15:31] I am just trying to see if I can log into my WordPress server here, and I’m having problems, so I’m not going to waste anybody’s time on that.

Mark Anderson: [00:15:40] Very quickly, before we skip ahead: first, just to pick up the point about Solid. Something I’ve heard said — and it wasn’t said in any way disparagingly — is that in a way Solid is sort of trying to take another crack at the semantic web, which never quite happened. You know, when people wrote about the semantic web, I think they genuinely thought it [00:16:00] was going to happen, and clearly it hasn’t. So this is another attempt, and it probably comes more from an engineering community that is rather exasperated that the world doesn’t understand, you know, XML — and, you know, things like RDF are just not natural to most people, and I think that upsets them. So Solid is an attempt to provide a referencing means. But I think where it comes from was picking up the ashes of what the semantic web never quite came to be, if that helps. I mean, it does. But I’m still perplexed as to what people’s expectation is of the impact it would have at a user or community level, because ultimately technologies must have that or they aren’t anything. And so when people say things like blockchain is going to revolutionize [00:17:00] Kickstarter and yet you won’t feel a thing — you can’t have both: either something is going to change things, or it’s not going to revolutionize anything. So indeed, it’s another version of that "but all my friends use Product X" exasperation. Yes, I mean, I think it’s well intentioned. I think a lot of good work’s gone into it. As to my sense of whether it’s going to go anywhere fast: likely not, because what it makes no attempt to do is to wrap the human into the thing. Now, if we were going to hand everything off to our robot overlords, Solid probably would be a good part of the jigsaw. But I don’t think we’re there yet.

Frode Hegland: [00:17:44] It’s pretty simple. Ok? Innovation will only happen if it’s needed.

Brandel Zachernuk: [00:17:50] Hmm. Ok, well, thank you for the detour. I’m somewhat relieved and disheartened to hear that the future of text community also doesn’t [00:18:00] know what Solid does for us.

Frode Hegland: [00:18:03] I would absolutely love to understand and support Solid. I mean, Tim Berners-Lee is not only Mr Famous Man, but he is one of the wisest people I’ve come across, and I would love to have him active in our community. I’ve only had the chance a few times, and I was so impressed, but I just don’t grok this, to use that Americanism.

Mark Anderson: [00:18:24] Well, I don’t think they’re doing anything wrong. I think it’s a classic case of maybe just not doing anything right at the same time. So, you know, it’s well thought out. I don’t think there’s anything in it, apart from for people who are going to go really deep down the technical byways. And, you know, in fact, I really do think it’s semantic web 2.0; semantic web 1.0 didn’t go anywhere, and the question that perhaps they aren’t asking themselves — they’re asking the wrong questions — is why did the first thing fail? But let’s not go down the rabbit hole, because it’s not our problem.

Rafael Nepô: [00:18:59] And the thing is, [00:19:00] with all of the projects that are decentralized — where you have your own little server on your machine and you own all your information and everything like that — it has to be set up the moment you’re setting up the computer. So it has to be out of the box. If it’s not out of the box, I don’t see it happening any time soon.

Frode Hegland: [00:19:20] So guys, a key question. And, you know, I just texted Adam, because he hasn’t been here for a while, to make sure he’s OK. And that is — I don’t want to give anybody work; that’s not what we do here. But let’s say, for the sake of argument, Brandel, you had decided that you desperately wanted the journal to go into your world. You’ve already said you would prefer to have it as HTML. Can you tell us a little bit more? Because if it’s cheap and easy for me to modify and export from Author, considering we already export to WordPress, then maybe we could do something that we could easily export to all kinds of different, useful things simultaneously. [00:20:00]

Brandel Zachernuk: [00:20:02] Yeah. So, having the simplest markup. One of the reasons why Wikipedia is such an appealing thing for me is that, one, it serves a very large purpose; two, it’s pretty good information; but three, the representation is incredibly simple. So there’s very little that you need to know about what the CSS is doing. I mean, it’s good that they have a heading one, heading two, stuff like that. But more importantly, it’s very simple and somewhat consistent across the articles, so that you know that a paragraph is representative of approximately this amount of text, and stuff like that. Whereas, for better or worse, on a lot of commercial websites those kinds of guarantees aren’t necessarily the case. Apple does a pretty good job, but fighting to make sure that the textual representation is somewhat [00:21:00] consistent with the way that the most basic reader would want to be able to render a page is a constant uphill battle with anything that’s a little bit more opinionated with regard to representation. So, yeah, I think within the context of that, I would also aim for and argue for simply a simpler representation, because — not just with VR, but with other technologies in the future — the vision of HTML as a document format that is less opinionated about presentation than the specific browser that’s responsible for doing it, I think, was a really useful thing. And as computing changes to become a party to representing things in a more diverse array of ways, we’ll probably need to lean back on that and remind people that HTML is a format for information, and browsers are a machine for deciding what to do with it.

Rafael Nepô: [00:22:00] Mm [00:22:00] hmm. Right. For the Mee platform as well, basic text would be better for structuring it in different ways, and even modularizing it if we need.

Mark Anderson: [00:22:14] This is mostly consistent with the idea of having a paragraph-level addressability. And in fact, if you think about it — leaving aside the exact letters HTML — if you have this structured approach effectively built in, if you treat that as a necessary constraint, I suppose that’s one way to look at it, it frees you up from other stuff. Now, if I’m working in, say, Author, I don’t need to see that. But Author knows, at some point, that the ground truth underneath whatever you’re typing — the structure, you know, the headings you zoom in and out of — are in fact the headings belonging to this structure, which at the moment we’re calling HTML. I think that’s the way I see it fitting together, [00:23:00] because in that sense, in some aspects, Author or Reader will be functioning as the browser. I know that might seem a bit odd, but effectively it is, because you’re saying: oh, here’s some text; yep, I know what this is; this is how I present this sort of text. And if I read something that looks like a document and doesn’t look like a web page, I’m happy — that’s what I expected to see. As the end user, I don’t actually really care about the structures. Those of us who care about the interrelation and the permanence of that text do want to care about that structure, because that’s half of the thing that helps us cross-reference it and keep it alive.

Frode Hegland: [00:23:40] Right. So a question — I’m asking Rafael, Peter and Mark first: how do you want to consume the journal? How do you want to, quote unquote, read it? And in what way?

Rafael Nepô: [00:23:53] Oh, that would be up to the readers, [00:24:00] but for me personally, I would go tablet-based. I think reading on a tablet is incredible.

Frode Hegland: [00:24:11] Hang on, Rafael, just to really explain the question. I also mean: should we transcribe it, you know, so it’s properly done with a human — which your friend Daniel has been very good at? Should we just use the YouTube transcription? Should we leave it as video, or something else? What is the kind of medium that you guys personally, when you go back in a few months to look back, would want to use?

Rafael Nepô: [00:24:35] So, for example, a lot of people love podcasts, but I cannot listen to a podcast for the life of me, because it doesn’t engage me in the way that video engages me. So if it’s a recorded podcast where people are talking, and there’s camera one and two jumping back and forth, that’s perfect. I love to watch people having conversations. [00:25:00] So the audio format for me is no good; I tend not to consume audio content — spoken audio content, right? Only music. So personally, I would go video and text, considering we’re going to have these recorded.

Frode Hegland: [00:25:28] Ok, so do you think that, if we can afford it, considering it costs about $50 each, we should do a human transcription? Do you think we should do that, or do you think we should use a kind of scraping mechanism to get the raw YouTube transcript, which doesn’t say who was talking? For your own personal consumption profile.

Rafael Nepô: [00:25:48] So for my own personal consumption: if we do it twice a month, that’s fortnightly, right? And if we have fortnightly publications, connecting that with the [00:26:00] newsletter and having it in both — you know, receiving it in an email. But I like to print out stuff, right? So if it’s easy to print out a little booklet with the transcription, that would be nice. But that’s me personally; I would do that even if I had to do it myself, right? But yeah, a fortnightly publication in text would be amazing, along with, you know, the video conversation of the meetings.

Frode Hegland: [00:26:36] What about you, Peter? How would you like to consume the journal, which, well, at least in large part will be Zoom meetings? How would you like to consume the result?

Peter Wasilko: [00:26:50] Well, to my mind, I would like to see typeset text on a tablet, backed up by machine-readable text.

Frode Hegland: [00:26:59] Ok, [00:27:00] Mark,

Mark Anderson: [00:27:04] I think — I mean, to me it’s actually really quite context dependent. If I’m wanting to use a part of the journal where there’s a lot of cross-referencing, I don’t, for instance, want to be listening to a conversation. I want to be looking at it as some kind of hypertextual thing — regardless of what the sort of browser type is — something where I can, in the moment, follow all those links. Otherwise — it’s interesting, what Rafael was just saying — I find that, yeah, I find getting traction in podcasts is difficult. What I tend to do is listen to videos. I say listen, because I’m probably not watching the video most of the time; I flip back occasionally if there’s something being shown. I normally listen at between one and a half and two times speed, because there’s too much filler otherwise, and my mind tends to wander off onto something else. So I’ll [00:28:00] listen fast, and if necessary I’ll backtrack — if there’s something interesting I didn’t understand, or if it’s signposted that we’ve now got to the bit that I made a note of before that I really need to listen to.

Mark Anderson: [00:28:10] I might slow it down. But again, in fairness, I’m talking there about listening to someone in my native language — you know, another native English speaker; I’m very conscious of that. But then again, if it allows some people to listen to it more slowly, because that aids comprehension, I think that’s a good thing. Again, if I was reading the journal maybe not in my first language, probably as important would be having some translation facilities at hand, whether provided, you know, as part of the journal’s work or something else. And that’s why I say it’s really sort of quite contextual. You know, if it’s English this week but it’s all Japanese next week, I’m going to want to interact with it in a different way, because I know from the get-go that I’ve got a steeper comprehension task [00:29:00] involved. I’m not sure if that’s helped, but that’s my sort of answer to the question.

Frode Hegland: [00:29:07] Yeah, I think it has, actually. Brandel, how would you like to consume it?

Brandel Zachernuk: [00:29:13] I’m in the audio-video camp. If it’s primarily to do with the transcript or the video, then I think that that is the source of truth. I definitely do a lot of listening to videos. But having that accompanied by a written and indexed concordance of where the primary things are, and an overall sort of summary of what it is that got covered in that time, would be useful, so that you can kind of glance and say: well, if I need to go back to that conversation and kind of re-familiarize myself with it, then I can do that. So that would be really useful as a representation — similar to the Worldwide Developers Conference [00:30:00] videos; although YouTube doesn’t have that sort of precise component, they’ve got a clickable transcript. So that would be really nice. The extent to which one has control over that, as a consequence of things being on YouTube, I think, is limited. But that’s not to limit people’s capacity for building sort of sidecar information that can link to timecodes and stuff like that as well. I’d be curious as to what kinds of solutions already exist for that, but otherwise I’m pretty interested in playing with what kinds of additions and capabilities could be bolted on, either by using it as an API or by writing bookmarklets. I’ve had a lot of success in the past with just writing things to take control of the JavaScript on a YouTube page for my own purposes.
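The bookmarklet idea mentioned here — taking control of a YouTube page’s JavaScript for your own purposes — can be illustrated with a small sketch, in the spirit of Mark’s habit of listening at 1.5–2x speed. This is hypothetical: the `<video>` selector is an assumption about the page’s markup, not a documented API, and the rate cycle is invented for the example.

```javascript
// Pure helper: given the current playback rate, pick the next one in a
// 1x -> 1.5x -> 2x cycle. Any unknown rate restarts the cycle at 1x.
function nextRate(current) {
  const rates = [1, 1.5, 2];
  const i = rates.indexOf(current);
  return rates[(i + 1) % rates.length];
}

// Bookmarklet body: find the page's <video> element (an assumption about
// YouTube's markup, which may change) and bump its playback rate.
function cycleVideoSpeed(doc) {
  const video = doc.querySelector('video');
  if (video) video.playbackRate = nextRate(video.playbackRate);
}

// As a bookmarklet, the same logic is wrapped in a javascript: URL, e.g.:
//   javascript:(function(){var v=document.querySelector('video');
//     if(v)v.playbackRate=v.playbackRate>=2?1:v.playbackRate+0.5;})()
```

`playbackRate` is a standard property of HTML5 media elements, which is why a one-line bookmarklet like this can work on any page that uses a native `<video>` player.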

Frode Hegland: [00:30:50] Alan enters just in time to be asked the same question.

Mark Anderson: [00:30:55] Well, what’s up?

Frode Hegland: [00:30:58] So we are going [00:31:00] through the planning. And are you coming on video, or is it just audio today?

Alan Laidlaw: [00:31:09] Audio. I’m having internet troubles.

Frode Hegland: [00:31:12] That’s fine, that’s fine. Ok. Right. So I talked to Ward Cunningham yesterday — you know, the wiki guy — and he’s very happy to join us. So we have a good amount of people. So we’re considering every two weeks, because that means 12 people by June. I think we can do that. First of all, do you have a big comment on that?

Alan Laidlaw: [00:31:36] I do. I think it’s insanely ambitious, but, you know, that’s not the worst thing in the world. I think maybe a way around that would be — what if we thought about this in seasons? So [00:32:00] we do the two weeks, and then we say: OK, we’re going to do this for, like — there’s going to be a four-month crunch, and then we can assess for two months or four months, and then another season.

Frode Hegland: [00:32:16] Ok, that’s very interesting. So the discussion you just entered into was: I’d asked everybody how they want to consume this, quote unquote, journal. And by that I meant: do you want to read it as a really refined transcript — somebody does a transcript for $50 apiece — or just scraped YouTube text, which doesn’t even know who’s talking, or audio or video? What should it actually be? And Adam — ah, Alan is thinking about an answer, Adam, and we shall ask you in a moment. By the way, Happy New Year, everyone.

Mark Anderson: [00:32:57] Happy New Year.

Frode Hegland: [00:32:59] While Alan is [00:33:00] thinking, just to repeat the question so you can also think about it: this journal thing that we are now planning today — you know, we’re going to have at least one hosted meeting a month, maybe two, but the thing that comes out of it — how would you like to consume it? Just as video, or as a transcript? And if it’s a transcript, in what form, et cetera? So that’s what Alan is going to tell us now, and then hopefully you.

Alan Laidlaw: [00:33:27] Here’s — uh, oh, gosh. Ok. Let’s take a step back, and rather than think about what we’re trying to make, let’s make assumptions about our audience.

Frode Hegland: [00:33:42] But, Alan, before we do that, can we just talk about us as individuals? So, your own personal preference?

Alan Laidlaw: [00:33:51] All right. So my personal preference is that I’m pretty swamped, and [00:34:00] I don’t know that I would... I think a consumable would wind up collecting on my wall, right? My bookshelf — in whatever form, you know, whatever metaphor you like for that. So, a passive kind of resource that’s always there, a Wikipedia of sorts. I don’t know — since I’m on the spot, I’m trying to think of what are the opposites of deliverables, what are the opposites of consumables, you know? And on the flip side, you’ve got something that’s super dynamic, like Discord or Circle or Slack, and I am getting more and more allergic to those, because that just adds to the overwhelm, right? So that’s [00:35:00] kind of the angle that I’ve tried to take. It’s like — when I say take the audience into consideration, I mean me, but... I don’t want this to be like something that you put a bow on and give away. But I don’t know; that’s an interesting question. I think the newsletter made sense, in the kind of low-key, here-if-you-want-it way. Uh, yeah, that’s all I got for the moment.

Frode Hegland: [00:35:42] Just to add to that before we go to Adam: when I met with Ward Cunningham yesterday — who really wants to come in and talk to us, and all of that good stuff — he also hosts wiki meetings. And so the whole idea you mentioned recently about kind of us reporting on other things — [00:36:00] yeah, I like that idea of communities reporting on other communities.

Alan Laidlaw: [00:36:07] So I have more thoughts on that as we get to it.

Frode Hegland: [00:36:13] Ok, so Adam, how do you want to consume the record of our interactions?

Adam Wern: [00:36:20] First, I don’t like the word consume. That word makes me a bit allergic, because it feels like something you eat and it passes through you, but I want to, in a way, interact and dance with the ideas, really be immersed in them, and consume is the wrong metaphor for that. The same with the word content — I don’t like the word content either. I want works, and not content, because, yeah, content — it’s a low bar to be content [00:37:00] nowadays. So I’d rather not have content but works. But to be realistic: when I watch talks or read books, a very low percentage is really, really meaningful or useful, but those parts are very useful. So I want to find the few ideas or sentences that you can keep and think more about. And sometimes it’s not even in the talk but in the discussion afterwards, or between the lines, or in your own interaction with the ideas, that you have to make sense of it. So I don’t know if I’m answering the question, but most of the value comes from the [00:38:00] interaction with the material — not just watching or reading a newsletter, but actually thinking about it and discussing it and taking it further. For example, for the Future of Text book, there are so many ideas there that I feel are kernels or embryos for discussion, and most of the value comes from actually discussing those ideas, not from just reading a formulation of the problem.

Mark Anderson: [00:38:30] We must also point.

Adam Wern: [00:38:33] So I don’t know where that leads, but that’s my take.

Frode Hegland: [00:38:40] I think that’s wonderful, Adam. I see Rafael has his hand up, but just really briefly, Rafael. So, what we have automatically is this video stream, you know, in whatever direction we want. That is a thing that we have, right? So that’s great, that exists. It’s a channel that we’re probably going to rename; we’ll talk about that today. But I think [00:39:00] we’re all saying here that this idea of a professor sitting down with a sweater — like, I got this one from Edgar, by the way, coolest sweater in the world — but, you know, sitting down with candlelight and reading through the pages of our discourse is just not going to happen. Nobody has time for that anymore. So what I think Alan and Adam really highlighted, and I think everybody agrees: we need — oh, that’s interesting — you know, we need a way to have that thinking space. So we need to think beyond everything. I think that’s really, really fantastic. Ok, Rafael, sorry.

Rafael Nepô: [00:39:34] Yes. When Adam was talking, it reminded me of what David Lebow did for the Future of Text book, where he took every piece and, you know, he kind of scrutinized it. So, select, you know what? OK.

Frode Hegland: [00:39:49] Well, Rafael, we lost your audio.

Alan Laidlaw: [00:39:54] Can you repeat? Still can’t hear you.

Mark Anderson: [00:40:02] You [00:40:00] I still haven’t

Adam Wern: [00:40:08] I heard some static Rafael, so maybe it’s my microphone that actually

Rafael Nepô: [00:40:13] I think the battery on the microphone died. Can you hear me now?

Alan Laidlaw: [00:40:17] Yes.

Rafael Nepô: [00:40:18] OK. So hearing Adam speak reminded me of what David Lebow did for the Future of Text book, where he basically took the meta information of each of the texts and made it visible. So if we have, you know, my talk with Richard Solomon, I want to know who he mentioned, you know, any dates or any kind of technical words or any kind of specifics, because that way I can have an overview of the conversation without having to digest the complete conversation. If I find the tags interesting, then I go in, and then I do a deep dive and consume everything. But I would love to see the content, you know, reduced [00:41:00] to, you know, the main pieces.

Alan Laidlaw: [00:41:04] A comment on that, unless there's somebody else speaking. That's a great point, in fact. What that made me think is, hey, that was one thing I wanted to do for Future of Text as well. I would have had no idea that he was going to do that. Oh, I wish that I could have known that he was doing that, and maybe I could have helped him out, right? So that makes me think maybe there are some embryonic approaches that have cropped up in prior texts. Say, hey, this is a pattern that we like, let's expand on this particular thing, you know, like the sound bites. But maybe there are other [00:42:00] approaches as well where we could sort of say, this is a pattern, who wants to be a part of this pattern, who wants to help make this pattern happen? Then it's easier to think about and create.

Frode Hegland: [00:42:12] That was really, really high level, which is really cool. But in this meeting today we have to nail down a few practicalities. And also, I just posted in the text chat, Zaatar Gqom Esther Connection, they do some of this Zoom stuff, but I do think that it's kind of annoying that we're not all on the same platform, because the thing about the Zoom transcript, sorry, video, is that it doesn't somehow computationally say who is speaking when, and that would be really, really useful. We've gone through different levels of different software trying to extract it, but you have to teach it; it just doesn't work in all of that stuff. So sorry, I just got a message, that's why I was interrupted. So I'm just [00:43:00] thinking, OK, this is me doing a detour against what I've said you shouldn't do, so I'll try to be quick. Imagine if we had an app on a computer, and if it's a web app, fine, doesn't matter, but it has that key thing we talked about: you can press buttons to say when something is interesting, or tag it live, right? And if that knows what the community is, then all of those are compiled into a PDF, HTML, virtually, it doesn't matter, it could be anything at that point. But that means that we're beginning to get close to this thing of what our journal is. Here's the video, that's the core truth, so to speak. And here is our list of the bits that we thought were interesting or not, and you can just click through: Brandel highlighted something, Peter thought something else was boring, whatever it might be. Isn't that a basic kind of thing we should attempt to somehow try to make for ourselves?
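The "press a button when something is interesting" app Frode describes could be sketched as a tiny data model. This is purely a hypothetical illustration, no such app exists in the group yet, and every function and field name here is invented:

```python
import time

def make_annotation(member, session_start, kind="interesting", note=""):
    """Record an annotation at the current moment, as seconds into the session.

    `kind` could be "interesting", "boring", or any free-form live tag;
    `note` is an optional highlight the member typed.
    """
    return {
        "member": member,
        "offset_seconds": round(time.time() - session_start, 1),
        "kind": kind,
        "note": note,
    }

def compile_session(annotations):
    """Merge everyone's annotations into one time-ordered record,
    ready to be rendered out as HTML, PDF, or anything else."""
    return sorted(annotations, key=lambda a: a["offset_seconds"])
```

Each member's client would call `make_annotation` on a button press and post the result to a shared server; `compile_session` is the export step.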

Alan Laidlaw: [00:43:59] I'm getting the context [00:44:00] of the talk more, since I jumped in late, starting to hone in a little bit more on what you're after. But, uh, somebody else can go ahead.

Frode Hegland: [00:44:13] I think Mark has a hand up.

Mark Anderson: [00:44:15] I'd just pick up the point, it's interesting that you said we're not on the same platform, and I was just thinking about the two of us saying, you know, I'm almost over Discord, or whatever the best word is. I'm the same, because basically, although they work at distance, it's about effectively being in the same room, and if you're not in the same room, it's not so useful. And also, it's one more thing that's actually a massive burden for the occasional user. And yet really, what we're talking about is addressability. So the key thing here is the structure of what we do, not necessarily the means of consumption. Because I think the other thing [00:45:00] that, you know, Alan, unprompted, said on arrival was, well, we need to think about how, in effect, other people use it. And I know in a way we're a useful proxy to start with. But it is also useful to think: how does somebody who has no immediate drive to use it, no immediate requirement to use it, how would they get stuff from it? And the thing that I hear coming out from all angles is effectively structure for interaction. So if Adam just wants to dance through the data, that's cool, if the structure is there so he can. It might be more useful to think about it in terms of the structure, and how the data structure supports that. So rather than thinking of it in terms of an idealized output, it's: what are the things that we need to record, and how do we put them together in such a way that the consumer, [00:46:00] the browsing device, medium, context, can get what the person needs out of it? I think that's an easier, deeper, and more useful long-term way to address the question we're asking.

Frode Hegland: [00:46:16] Well, I think no, no, no, not the video back off.

Mark Anderson: [00:46:20] All right.

Frode Hegland: [00:46:21] Yes. I'm just kidding. Sorry. So you have a response to Mark, right?

Mark Anderson: [00:46:27] So yes, I had.

Frode Hegland: [00:46:28] Yeah. If I can’t see you.

Alan Laidlaw: [00:46:30] Yeah. No, no, no, no. Let me grab my glasses anyway.

Frode Hegland: [00:46:34] Oh, wait.

Alan Laidlaw: [00:46:35] Just to grab these glasses, I’m going to make it. I’m going to make it real. Dark Web. All right. I have the solution.

Mark Anderson: [00:46:46] Ok.

Alan Laidlaw: [00:46:48] So first off, it's highly likely that we're overthinking this, right? Because that's what we tend to do. And I think the key word is embryo, [00:47:00] oddly enough. And what it reminds me of is, you know, Believer magazine, but a lot of times in older publications and ephemera, whatever, you'd have this nice frontispiece to a chapter, you know, here's everything that's discussed in this chapter, right before you jump in. That's pretty much all that we need. Like, I think, hey, no one's going to go through and watch us talk, right? But if we say here are the things that were discussed, and we kind of turn it into either a nice little narrative, like, this was brought up here and that led to this, or there are links within that paragraph, you know. But I'm talking about like two or three paragraphs, because first of all, that's easy to put together. We could even go back to previous ones and do that. Then we can link to whatever references, right? But we can't expect the audience to go through and do a deep dive. What's far more pleasurable is: what came out of this? Oh, these are interesting, I see the bounce back and forth, [00:48:00] and now it feels like I read a simple little novel and I can move on with my day.

Frode Hegland: [00:48:05] OK, so I have a question then, because I both don't like that at all and love it, for two reasons. I don't like it because, one, it would mean work outside of this time, and two, either I'm going to have to do it, you're going to have to do it, or we have to pay someone, because it's, you know, a little bit of thinking work. The other point is, I do love it, because if it can be made to happen, of course it's fantastic, but those two barriers are quite strong for me. But if we decide that, for instance, I've used a friend of Rafael's for some transcripts and he's been very good, he's very reasonable and he's clever, so he understands our field. And if we tell him to do his transcript where he doesn't have to do interpretations, but, you know, all of this "Hello, where are you? [00:49:00] We're in blah blah blah", all of that he just doesn't write down, he can still write quite a verbose transcript while leaving out a lot of that stuff. I would be OK to try to find funding for that, let's say, if we find a way to, you know, fifty dollars for the one hour or something, which we should probably do. But I think that if anybody in this group says we're going to try to do that, we may disagree on what was important once we start editorializing. And I think therefore the editorializing should be its own layer. So that's why I like the idea of it, but maybe not even that is necessary, maybe something where we can all point, because Peter has been going on and on about high resolution addressing, because it is probably the most important thing:

Mark Anderson: [00:49:44] Is that

Frode Hegland: [00:49:46] This is good. This is bad. This is good.

Alan Laidlaw: [00:49:49] It's more work and it's less consistent. And so the thing about content, or whatever, if it's not going to be consistent, you know, if it starts out in a flurry and then it kind [00:50:00] of drops off, I'm just, it's going to drop off my radar, I'm no longer going to trust it, you know? So that's where patterns are useful. I think if you want to do the transcript, that's great. I think that's a lot of work, but someone can be paid to do that. And then a more abstracted layer, even just from the transcript. Which, you know, I'm thinking of Gong, which is a software that I use a lot that is similar to Zoom, or it syncs in with Zoom, but it does an automatic transcript of everything that was spoken. It gives you a timetable of who is speaking when, and then it categorizes what each person is talking about.

Frode Hegland: [00:50:44] Why are we not using that software, Alan?

Alan Laidlaw: [00:50:47] It's super expensive, and it's designed for meetings with clients. It's really cool, though. I mean, they're really on top of it. If [00:51:00] I could, if I was allowed to give you a tour, I totally would. Yeah, there you go.

Mark Anderson: [00:51:10] Face pretty.

Alan Laidlaw: [00:51:13] Yeah. Yeah. The point is that while you have the transcript, what they've done is tags, they've done auto-tagging now. You know, of course, Otter does that as well. What I'm talking about, I think, takes less work than the involvement of the thumbs up, thumbs down, right? And is less of a big deal. It could be the kind of thing that could easily be created by one or two of us, submitted to the group for edits, you know, and then agreed to a final form, and then that's just like the frontispiece. And if you want to go watch the video or read the full transcript, you can, [00:52:00] right? But in the meantime, you get a nice, hey, here are the things that were discussed, you know? And I think when that's done, we can put that episode to bed, you know, and if we want to go in and explore it further and, you know, extract an inner twinkle, we can dig in.

Frode Hegland: [00:52:24] Here's my thesis. It's one hundred and fifty-four thousand four hundred and twenty-four words, including the appendix, and I wrote everything in one document. Mark told me he had one chapter per document, which I know people who write books do too, so that's completely legitimate. But I'm talking about it because of the interaction. So for instance, right here I come across Hamilton. Oh no, Hamilton again, right? But I want to know where it's mentioned. So I do a command and I can see all the mentions of Hamilton in my entire thesis. I think that kind of everything-interaction is so crucial. You [00:53:00] know, here is British Home; it won't be as many. And then I go back out of it. That's the kind of thing we need to aim for for our record. Even though I'm all about PDF as a save mechanism, how the heck are we going to be able to find every time Brandel talked about VR, and Mark Anderson also did? You know, we need to get to a point where that's the kind of question that can be answered, right? So that's why it's a yes to the whole idea of having some kind of a summary.

Alan Laidlaw: [00:53:32] An obvious answer might be Visual-Meta, right? Like we.

Frode Hegland: [00:53:39] Yeah, but the thing is, OK, first of all, a really important question: what should we call this? When I upload this today, I want it to have a name we're going to love over the next year. What is this? Is it called Dancing with Text? Is it called Text Coffeehouse? Is it called something entirely different? What do you guys want it to be called?

Mark Anderson: [00:54:03] My [00:54:00] wife reminded me that "discussion group", while not being an exceptionally vivacious name, is accurate, and reasonable in a number of other contexts as well. So, to the end of being recognizable, calling it something like a text discussion group is not unreasonable. But I'm also open to a challenge.

Frode Hegland: [00:54:30] If we go with that, would it be a discussion group for future text discussions, or just text discussions?

Rafael Nepô: [00:54:39] The state of Texas?

Frode Hegland: [00:54:41] Now we want the picture, though. The state of Texas, the state of Texas. I actually really like Future Text Discussions. What do you guys think?

Mark Anderson: [00:54:59] Yes, [00:55:00] I think it picks up exactly what I was getting at. And, put bluntly, it's not too pretentious, because there are so many things, you know. In an attempt to be appealing and play to the crowds, what also often happens is you look back and think, God, did we really? Is that really what we thought was the best way to describe what we're doing? Which is something at one and the same time more humble, but much more interesting, than the label we put on it.

Rafael Nepô: [00:55:32] Yeah, I like I like the generic descriptive. I think they it works nicely and people are you don’t have to explain if we call it, for example, the library. Well, what is the library, you know? Well, the library is a meeting. We go, you know, every week, every two weeks. So if it’s descriptive, people get it fast and it’s easy to share.

Frode Hegland: [00:55:52] The first one was August 2020. Wow. I don't know why this [00:56:00] is in there, that's a music thing that always slips in somehow. I'll try to find a way to get that off it.

Mark Anderson: [00:56:06] Electronic remix of the Orange Is the New Black there, I think. Yes. All right. Thank you.

Frode Hegland: [00:56:13] So we tried a few. You know, I numbered them one to three; I'll probably rename them by dates. Let me make myself a screenshot. But OK, Future Text Discussions, and that's what I'll call the video. So that's step one we've agreed on, right? Yes.

Brandel Zachernuk: [00:56:38] What about Discussions on the Future of Text? I mean, it's a very small point, but Future Text Discussions sounds like the discussion might possibly also refer to the future, rather than simply to the text.

Rafael Nepô: [00:57:00] Discussions [00:57:00] on the future of text.

Frode Hegland: [00:57:08] Tell me what other way you can interpret it again, please.

Brandel Zachernuk: [00:57:14] Um, Future Text Discussions could mean that they are discussions of text in the future, or that it is somehow a future text.

Frode Hegland: [00:57:28] How about Future of Text Discussions, because all the other stuff is "The Future of Text".

Brandel Zachernuk: [00:57:34] Well, that can still mean that it's the future of text discussions; it could be about the future of discussing text. So Discussions on the Future of Text is the least ambiguous formulation.

Frode Hegland: [00:57:53] But that sounds clunky. What about little quote marks around it, like this?

Brandel Zachernuk: [00:57:59] Yes, [00:58:00] that does the same job as, in writing, the place that "discussions" goes. I don't have a huge preference. I mean, I lean towards yours. I'd be happy with it, honestly, because those things are qualitative; it could actually just be to do with whatever other pattern matching my mind is doing on it. So I'm happy to see that go ahead.

Frode Hegland: [00:58:27] I think we'll do it this way, then. So, are we all happy with "Discussions on the Future of Text"? This is kind of nice, by the way: seven more and we've got a hundred. Um, actually, because this one shouldn't be there.

Alan Laidlaw: [00:58:50] Brandel's point is valid, and I agree, but I also think your point about the clunkiness is kind of important too, because there is a kind [00:59:00] of a, you know, cognitive placeholder that this holds in people's minds, and it's easy for those things to get lost. So, like, a lot of things get abbreviated and you just wind up saying, right, Future Text, right? So maybe there's a way to have both. The Future of Text Discussions is nice because it shortens to FTD.

Frode Hegland: [00:59:30] I think that's a good point, Alan, because also, with Ismail I'll be doing these interviews, so we're going to call that The Future of Text Interviews. The Future of Text Symposium. The Future of Text, Vol. one to three. So we lead with that, but as long as we have the quote marks, it's OK, right, Brandel? OK, cool. Right, so back on this, a few more questions that are really practical. Should we invite Howard Rheingold? Considering he wrote [01:00:00] the book Tools for Thought.

Mark Anderson: [01:00:04] I'd be happy to have him. I've really enjoyed his discussions with Andy Clark, and I reread Tools for Thought recently and very much enjoyed it. So a link to that is, yes, sure.

Alan Laidlaw: [01:00:21] No, I mean, like, so I know the book, but what was the thing that you enjoyed?

Frode Hegland: [01:00:26] Recently, I like you said recently

Mark Anderson: [01:00:31] I found him actually because he was one of the few people who interviewed and had a discussion with Andy Clark on the extended mind, many years ago. OK.

Alan Laidlaw: [01:00:45] I know Steven Johnson mentioned him, and he’s a favorite. Steven Johnson might be another one to reach out to, actually.

Frode Hegland: [01:00:52] Okay, well, let's go through this list briefly then, because, you know, we're a little bit over time. Right. So everybody [01:01:00] agrees: Howard Rheingold, yes, right? Anybody say no? Yes, we're going to invite him, I'll do that. Ward Cunningham and Richard Solomon we're happy with. Who else from this list should we invite? I'm going to try and lower it again. Oops! Uh, anybody here want to invite somebody specific?

Rafael Nepô: [01:01:31] Yeah, I'll try to contact Alberto Manguel.

Frode Hegland: [01:01:35] Do you have a link to this? Can you add his name as well?

Rafael Nepô: [01:01:38] Uh, could you send it in the chat, please?

Frode Hegland: [01:01:42] I can try.

Alan Laidlaw: [01:01:51] Oh, who added Andrew Hinton? I know him.

Frode Hegland: [01:01:57] I have. Oh, that was your thing.

Alan Laidlaw: [01:01:59] I’m actually [01:02:00] sort of sort of friends with him. I can reach out to him. Ok, great book. Can I add that?

Frode Hegland: [01:02:11] And Alan Kay is out of commission for a while, he's recovering from something. OK, good. So that means that we have quite a few people. You know, once we invite these, with the ones we have here, we should be able to do 12 by June, or one per month by December. So then we need to go back to this discussion of how we're going to record this. I honestly think, and I hate that expression, I apologize, I try to be honest with you all the time, that there will be snippets of stuff sent back and forth, and they should be able to point back to where they came from, right? And also other dimensions. So we all agree on that, but we need to decide on this: the very first [01:03:00] call has to go into this in some rudimentary form. All right, I need to step away just for one second, but let's just think of this very first call. How are we going to store it and disseminate it? One second.

Rafael Nepô: [01:03:20] He means not just publishing on YouTube.

Brandel Zachernuk: [01:03:27] Yeah. I mean, right now, YouTube has the transcript. I'm not familiar with whether there are more artifacts that are recoverable from the Zoom, having never recorded one myself; maybe I should try that sometime to see what one gets out of it. But it's a shame that people need to go to paid solutions for the who's-speaking aspect of it, because just thinking about what is actually available to the Zoom platform, that's trivially recoverable. The thing [01:04:00] that I did with being able to record the screen of a computer and put it on another computer was actually in support of being able to recover that data, which I'm confident I should be able to have done in half a day, which I'm going to try today. But other than

Alan Laidlaw: [01:04:15] That? Sorry, go ahead.

Brandel Zachernuk: [01:04:18] But with that, in combination with the transcript that YouTube provides, I think that we have some pretty useful artifacts. Not necessarily ones that are sort of user-serviceable, but it's a good start. Yeah.

Alan Laidlaw: [01:04:37] I think the bigger thing is the editing, a lot of the noise, right? Like, I keep coming back to the degrees of compression, meaningful compression, because no one has time to go through all of this. And maybe [01:05:00] that's a longer process, where we keep nursing it the way we do this Google Sheet, you know; it maybe doesn't have to happen all at once. But that compression is, I mean, if you're talking about something that you want to last for a long time, it's that, you know. We remember the phrases of Socrates more than the entire transcripts.

Mark Anderson: [01:05:31] Yeah. And also, it might take us half an hour to get to the bottom of something to which there is then an answer, which is the extractable fact. Now, if in 100 years' time some doctoral student wants to sit through hours of our conversation, it'll still be there. But you're absolutely right: many of the people who would probably imagine consuming this (a much-abused word, used here with a positive attitude) haven't got the time. And [01:06:00] if they had the time, they'd probably be here in person, and they're not, so.

Rafael Nepô: [01:06:05] The real value add

Mark Anderson: [01:06:07] Is actually, as well as the long-term repository, which we can't assume anyone will necessarily read in this raw form, the sort of extracted scheme of understandings that we come to, and cross-references, be they tools we leave to other groups, or effectively new understandings that we think we've established, right?

Frode Hegland: [01:06:31] So it's quite clear that nobody is going to be listening to this conversation unless they're a historian in a thousand years. So this is an example of something interesting for us, because it's framing the community and stuff, but completely boring for everyone else, right? And that's not a bad thing; it helps our conversation. So, OK, let's imagine we can build a database, let's think [01:07:00] a little bit old-fashioned. Let's imagine that into this database goes the video, and the chat transcript, which is hugely important because links and things go in there, and the normal transcript. If we have such a thing, if we just decide on that, then however we output it, we can output in several different ways, right? So the dimensions we need: time, which we have. We also need identification of speakers; it's so important, and software right now, unless you train it and spend a lot of time, isn't good enough. So should we just try to find a way to make a web app that does the thing we talked about, records all our own perspectives? First, it lets us click a button, this is interesting. It lets us add highlights and stuff, and just puts that into some server somewhere. And then we export it as PDF and HTML and all kinds of good stuff. So we just figure out a way to do that, because [01:08:00] we don't actually have a place where we should put our data yet. All we've talked about is, OK,

Alan Laidlaw: [01:08:05] Yes, let's figure out a way to do that. And I think maybe what's going on here is that we're actually talking about the need for man-hours. What we need is a post-production team, right? Because none of this stuff magically happens, right? And AI can, like, it can make it easier, augmented, you know, but it's going to be wrong in some ways. So we need to talk about the value of where to focus a post-production team, the urgency of that, you know. Does that happen right after the video comes out? I don't personally think so; I think the video can come out and then it can go through stages of compression and post-production. But can I mention a slight, Peter, [01:09:00] before you go, can I mention a slight twist of topic? Which is this. You did a great job on Digital Planet. And you froze. Or no, you just weren't moving. OK. And it reminded me that our purpose here is, one, navel-gazing and fun times with friends, but it's also kind of saying, hey, there's this future that everybody talks about, that includes climate change and equality and all that sort of stuff, and what we're trying to do is inch into that picture of the future, the general idea of the future that the public has, a place for text, right? And to think about text in this new way. And it doesn't have to be a specific solution; Visual-Meta is one of those parts of it, and it's a very good part. But mainly we want to wedge [01:10:00] into our Jungian unconscious that what the future is includes this kind of idea for text. And so we can get future thinkers, Cory Doctorow is another one I just dropped on there that I think would be fantastic, who are generally interested in the future. But we're saying, let's fit text into that, and change the narrative slightly, right? I think that is closer to our full purpose and why we're here.

Frode Hegland: [01:10:28] I think that's fair enough. If you want to invite Cory, that would be great. I'm going to give the mic to Peter now, but can we all agree that whoever we feel from this list we want to invite, we will try to do it today, so we have responses by Friday? That's a call to everyone who wants to invite someone. Good, it's all nodding. OK? Peter.

Peter Wasilko: [01:10:48] Yes, and to that, I posted the text of a newsletter posting: he's starting to move into looking at natural language processing, and he's prepared a guide to all the [01:11:00] resources he found, so the links are in there. Also, it occurs to me that the Zoom app itself knows exactly who's talking, when it's opening the microphone and running the text. So maybe what we should do is ask the Zoom team if they could provide a feature to simply dump a data file. Let's think: timestamp, and which person's mic is open at a given time.
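The data file Peter imagines, timestamps plus whose mic is open, would be enough to derive a speaker timeline. A minimal sketch, assuming a hypothetical event format that Zoom does not actually expose today:

```python
def speaker_timeline(mic_events):
    """Derive who spoke over which interval from (timestamp_seconds,
    person, mic_open) events. The event tuples are a hypothetical stand-in
    for the dump Peter asks Zoom to provide."""
    open_since = {}   # person -> time their mic opened
    intervals = []    # (start, end, person)
    for t, person, mic_open in sorted(mic_events):
        if mic_open:
            open_since[person] = t
        elif person in open_since:
            intervals.append((open_since.pop(person), t, person))
    return intervals
```

Overlapping open mics simply produce overlapping intervals, which is faithful to people talking over each other.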

Alan Laidlaw: [01:11:27] Yeah, I mean, Gong, yeah. It's in the metadata.

Frode Hegland: [01:11:34] Is it? I just had a look.

Alan Laidlaw: [01:11:40] Um, I mean, it's got to be in the channel metadata. Hell, Twilio Video does that as well.

Mark Anderson: [01:11:53] It may not be exposed. It may not be exposed to the user, or not with any ease at the moment, but it is in Twilio. [01:12:00] I'm sorry. No, I mean, it's in Zoom. Now I take your point.

Alan Laidlaw: [01:12:04] No, I know. But I know it's got to be. I'm assuming it's in the metadata, because Gong doesn't make guesses about who's speaking; it has a track of the members that have joined the group, and it has them all listed down, and whenever they speak, it pops up as a blip.

Mark Anderson: [01:12:25] It may well be that Zoom’s left that as an aftermarket product.

Frode Hegland: [01:12:29] Well, let's put it a different way. Let's say that we decide, here and now, that our normal calls will not be anything other than video recorded. That's it, we'll keep doing what we're doing for those. For the ones where we have a guest, we'll spend a bit of money having a transcription made; we'll figure out a way to finance that. If we do that, what format should the text be? We can have several formats, so I guess if we get it as Microsoft Word, which is a neutral thing, we can just copy stuff out of it. Let's [01:13:00] remember it's going to be a human doing it. Most likely it'll be Danilo, who has been very good for us, thank you, Rafael. He can deliver it in many different ways. How does Adam want it, and how does Brandel want it? You know, how do we want it, we who may do stuff with it before we give it to the world? But what could be easy for a human to give us the thing we want?

Mark Anderson: [01:13:26] Well, to take your point, as long as the Word document is capable, either through whatever you have or some other process, of producing the type of structured data that Brandel was talking about using, then it's fine. So I'd turn it round into the constraint: what does the Word document, or whatever format the top-level document is, need to address from the get-go? [01:14:00] In other words, you don't want to spend hours making a document and then find you've got to go back and tag it with some kind of headings or something.

Frode Hegland: [01:14:10] Mark, what would the headings be? The point about having a human who is not right now in this group, you may join, that would be lovely, there are some really good people on our periphery, is basically, you know, OK, he could take us by person. That's what he's done already: he did The Future of Text day one and day two, our annual symposium. But, you know, if he does that, so let's take you, Brandel, as a case study. This person will probably be Danilo; he is currently using Windows, he's writing it in Word now. But what would benefit you, so that you have to do almost no work and it's useful for you? I guess one of the things would be, at some points in the video, you'll have to have YouTube links with time markers. Not for every single person speaking, but something, right? What else? What do you want?

Alan Laidlaw: [01:14:59] I want a mix of what [01:15:00] Adam made, something close to the two interfaces that you designed, Adam, a mix between that and Reader.

Frode Hegland: [01:15:14] What about for your world, Brandel?

Brandel Zachernuk: [01:15:19] Uh, I'm used to scraping and processing, so from the perspective of being able to generate novel and interesting views, the things that I would make are what would be useful, speaking for myself. I do hope that it would be pretty easy to recover that quickly, and I grabbed a small sample of today's discussion in order to try to build something that will be able to cover that for subsequent sessions like this. But yeah, nothing jumps out at me at this point, other [01:16:00] than that the YouTube transcript is actually excellent; it's just that the real estate that's devoted to it is too small. So having some kind of more humane representation of that text, that tells you more about what it means, would be very useful. So one of the things that I would be interested in doing is, for example, taking the time codes and maybe looking at the discontinuities in them, such that you can identify that this piece of text coming after a pause suggests that it's a different speaker, or just that it's a change in topic, because a lot of the time changes in topic will be at the somewhat larger gaps within the conversation. So, yeah, I'm mostly thinking about it in that sort of data-representational sense, and without having a particular application in mind. I'm not aware of the things that I need to be able to build that, but I'm going to start playing with it.
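The discontinuity idea Brandel describes is easy to sketch: given transcript cues as (start-time, text) pairs, unusually large gaps between cue start times are flagged as likely speaker or topic boundaries. The gap threshold is an assumption to tune by eye, and real cues would ideally use end times too:

```python
def find_breaks(cues, gap_threshold=4.0):
    """Flag cues that start after an unusually long gap.

    cues is a list of (start_seconds, text) pairs, as recovered from a
    YouTube-style transcript. Returns the cues that likely open a new
    segment (different speaker, or a change of topic).
    """
    breaks = []
    for (t0, _), (t1, text) in zip(cues, cues[1:]):
        if t1 - t0 > gap_threshold:
            breaks.append((t1, text))
    return breaks
```

As Brandel notes, this conflates speaker changes with topic changes; a human (or a click-to-label pass) would still decide which is which.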

Frode Hegland: [01:17:00] Ok, [01:17:00] I see Peter and Alan have their hands up, but just really quickly: do you have a way, Brandel, where you can extract the YouTube transcripts?

Brandel Zachernuk: [01:17:11] Oh, yeah, yeah, that's easy. I built the thing that pulls it out for a Wordle generator, a bookmarklet that I built; I'll find it. And it was fun. So, yeah, it's just a matter of finding the HTML element that contains it and then effectively doing a copy-paste of it, so you can do that with yours.
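
A bookmarklet along the lines Brandel describes might look something like this. The selector and the text layout are assumptions: YouTube's markup changes often and is not documented, so anything like this has to be checked against the live page. The helper is written as a pure function so the browser-only part stays in comments:

```javascript
// Normalise a list of transcript segment nodes into plain text, one
// "timestamp caption" pair per line. Works on anything that exposes
// a textContent string, so it can be exercised outside a browser.
function collectTranscript(segments) {
  return segments
    .map((el) => el.textContent.trim().replace(/\s+/g, ' '))
    .join('\n');
}

// Browser usage (not runnable here; selector is a guess, inspect the
// page and adjust before relying on it):
// const nodes = [...document.querySelectorAll('ytd-transcript-segment-renderer')];
// navigator.clipboard.writeText(collectTranscript(nodes));
```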

Frode Hegland: [01:17:38] So let me, let me...

Brandel Zachernuk: [01:17:39] You can also do it in code.

Frode Hegland: [01:17:42] Ok, so before, just to finish with you, Brandel: could you help put together a thing that gives a transcriber, a human, an easy way to add that stuff? To add our names, for instance, so he doesn't have to type everything but he [01:18:00] can just click, click, click, click these other people, because that would reduce his workload and our money load.

Brandel Zachernuk: [01:18:05] Yeah, absolutely. That would be pretty straightforward. You might need to register individual speakers, and then once you have those as a pool of people, you'd just be able to click through the discontinuities, scroll through and see where different people come in and say, OK, from here on out it's Rafael, from here on out it's Adam, that kind of thing. So those things are manifestly possible.

Frode Hegland: [01:18:30] Ok. Let’s keep talking about that, but I’ve been a bit of a bully, Peter.

Peter Wasilko: [01:18:37] Yeah, I suggested JSON as a possible metadata format. Fields could be a time index, speaker, topic, responding-to, and the timestamp from the thing that they're responding to. That would suggest that the user interface should have a button that the user clicks when [01:19:00] he hears something that he plans to respond to, sort of like a queue-a-response button. When you click the response button, that would let the app know that the next statement you make should be linked back to the timestamp when you first hit that button. Then, when you hit the button, it would send the data through. I saw that there were a couple of projects in the npm ecosystem that were looking at replicating the functionality of Zoom and Discord using open-source, Electron-based tools. So the libraries to do all of the work in creating a custom client are already out there. It's just a matter of figuring out which library is best and walking through one of the tutorials: here's how to replicate the behavior of Zoom using npm. There are a whole bunch of them out there, so it is doable.
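
One record in the JSON format Peter sketches might look like this. The field names are illustrative, not a settled schema:

```javascript
// One illustrative entry: a time index, speaker, topic, and a link
// back to the statement being responded to (the timestamp captured
// when the "queue a response" button was first pressed).
const entry = {
  time: '01:19:00',
  speaker: 'Peter Wasilko',
  topic: 'metadata formats',
  respondingTo: { speaker: 'Frode Hegland', time: '01:17:42' },
};

// Serialise for storage or exchange between tools.
const json = JSON.stringify(entry);
```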

Peter Wasilko: [01:19:56] And again, if it's set up so that each person [01:20:00] is pushing those buttons as they go, we don't have to have a grad student being paid; the data will already be there. But we want it to be generating the data on the fly. The other thing is that we want to have some sort of a representation structure to describe the overall conversation. There are already ontologies in place from the linguistics community that do that. And there are also a couple of tools I saw a long time back; one of the O'Reilly books talks about them. I'd have to go poking through my library to try to find where that was, but they had some tools for marking up transcripts after the fact. Those were geared more for grad students going through transcriptions and meeting notes from interviews, to inject tags and appropriate notation. So we might be able to find some existing tooling that we could leverage in that regard. That's all that comes to mind at the moment.

Frode Hegland: [01:20:52] Ok. Right. Ok. There's a lot of stuff in there, but it's about how to get it [01:21:00] into that format. So I think definitely those who can should keep looking into that. Alan, I'm sure you have something related. Yeah?

Alan Laidlaw: [01:21:14] Well, this is not my product-manager side; this is the opposite. But what we're talking about, certainly in the format of metadata, reminded me of something that I still think would be possible and wonderful, and this might be the great test bed for it. I'd be happy to put in the work to help make it, essentially. It should be possible, with the right kind of Markdown, to say: here's the metadata form, here's the transcript form; hit another button and I see it as a blog; hit another button and I see it as a slide show; hit another button and I see [01:22:00] the video with time stamps. So it's all the same data, but I'm easily given these different views. Hit another button and I see it as a text conversation that I can view on my phone, right? That's not for the here and now, but maybe we can make steps towards that, and I think it would be pretty fascinating. Also, what we're talking about here, as it gets more wild, going back to the idea of rolling in the other brother-and-sister kind of groups, future of work, future of programming, speculative futures, whatever: if we make something like this, or have a kind of workable, experimental pattern, we have a real possibility for maybe Patreon, or raising money with these other groups. You know, they [01:23:00] use a similar pattern, or they do some version of it, and then all of them are consumable in a similar way.
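
Alan's "same data, different views" idea can be sketched as a single structure with multiple renderers. Everything here, the shapes, names, and output formats, is invented for illustration:

```javascript
// One transcript structure, rendered differently at the push of a
// button: blog paragraphs, bare slides, or a time-stamped video log.
const entries = [
  { time: '00:02:48', speaker: 'Rafael', text: 'You guys enter a VR phase.' },
];

function render(view, items) {
  switch (view) {
    case 'blog':
      return items.map((e) => `${e.speaker}: ${e.text}`).join('\n\n');
    case 'slides':
      return items.map((e) => `--- slide ---\n${e.text}`).join('\n');
    case 'video':
      return items.map((e) => `[${e.time}] ${e.speaker}: ${e.text}`).join('\n');
    default:
      throw new Error(`unknown view: ${view}`);
  }
}
```

The point of the design is that the markup stores only data; each view is a pure function over it, so adding a phone-friendly chat view later means adding one more case, not another copy of the transcript.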

Frode Hegland: [01:23:09] No question, this could be an amazing product; absolutely agree with you. Mark, I see your hand up. But just briefly, guys, I clicked on Brandel's link to the Wordle generator, which just makes me sick with excitement about what it could be. So, this meeting that we have today: I don't mind paying for a transcript, I think it's a very good investment, but I think we need to know exactly why and how. There's a few things we've decided today. We decided on the name of this community; that is the kind of thing that will be cited by someone, probably. You know, 80, 90 percent of today has been waffle, in the sense that for someone in the future, it's been important waffle for us, but it's been waffle; it's not going to be cited. So, Adam (sorry, Mark, just a few [01:24:00] specific things first): are you also thinking about putting this into a system that you use? Do you have requirements for how this should be output?

Adam Wern: [01:24:10] Not really, personally. I just want a few markers into the video conversation and I would be very happy, because I have downloaded most of the Future of Text videos and made a few simple pages with a few, uh, time stamps and comments, either quotes from you or just small marks with things I want to listen to and think about again. And that covers about most of my use case for this. In terms of using the available money and time we have here on planet Earth, I would rather work with [01:25:00] active reading, or maybe text in 3D and embodied, richer input forms for text, because I feel much more limited with the keyboard, for example. But this feels like I'm a party pooper.

Frode Hegland: [01:25:24] Oh, I don't think you are a party pooper at all. I think one of the things that has come up is that most of what we talk about is really boring; it's exciting at the time, but it's really boring to go back to. Let's just accept that. But then there are the really interesting bits that will have value, primarily for us, and for other people. So the kind of simple use case you're talking about, I think, covers almost everything. It would be great if we had a way to easily, live or later on, go back and click on things to say 'this is something I'm interested in', or even to have the text chat log integrated into the transcripts. [01:26:00] And imagine if we had a web page for this, you know, with basic login, so everybody here within the community could easily go in and add a little tag or a little timestamp, whatever you want. So we have a human who is in charge of doing the basics of putting it in; then, Adam, your system could have it, Brandel could have it, I could easily put it into Author and export a monthly PDF or whatever we want. We just need a place to put it. So I think that's really cool. Ok, Mark, I've been preachy for way too long, and you and Alan have your hands up.

Mark Anderson: [01:26:40] Don't worry. I mean, if it makes Adam feel any better, I'm going to do something else that might seem pretty open, but I think another thing not to overlook is whether what we're doing actually is useful in the general sense of scholarship and knowledge building. So another way of actually maybe getting some input [01:27:00] into this is, you know, to look at it and ask: are we actually doing something, quite apart from the general interest angle, that is a useful model, or even just useful data, for somebody? Even if they go to it and say, right, here are hours and hours of how not to do it; even that's useful. I mean, having just been through the actual process, often more illuminating than the things that worked are the things that didn't work, especially if there's some discourse around them. I suppose the point, in short, is just being open to the question of whether what we're doing is useful to people who are long-term librarians, government archives, repositories and things like that, and I'll leave it at that. I don't have a fully formed idea; it's more just being open to the possibility that we may find we're doing something, and it might just be useful to pull [01:28:00] that in. Because another thing that I find the academic community is really bad about is actually just getting involved in things that are happening. They want far too much up front: they either want it to all be fully finished, so it's all there already and there's no risk for them, or, you know, somehow it has to be new and sexy and fun. And I think that's ridiculous. This is just one of many things happening that people could benefit from.

Frode Hegland: [01:28:27] Adam, over to you. But guys, we really have to decide: on Friday, when we meet again, there will be a record of this meeting. We have to decide now what it should be and what's useful for us. If it's entirely wrong come Friday, fantastic: we've done a test.

Adam Wern: [01:28:42] I just wonder if, in three or five years, the tools for meetings, video meetings or video chats or Zoom meetings, will have improved [01:29:00] a bit, because there is intense competition between the major players here, and I think there will be major improvements. While in text in 3D, I don't think there will be too much interest, because it's so focused around games and other things. So there is a bigger void in, for example, text input, or text in 3D. If we want to do something unique for the world, I think it would be more useful to work with the three dimensions, for example.

Frode Hegland: [01:29:50] Yeah, the thing is, we know each other well enough now in this group for me to be pushy. Let me just be honest: this [01:30:00] has to be the beginning of a record, and I think, Adam, that is completely correct. So what we're going to do: I'm going to ask for this today to be transcribed into a Word document. Or should it be a Google Doc? What is actually easiest, or most useful for us right now? Well, I mean, for the transcription, I'll ask for it to have the chat, the text, inserted as well as it can be, with names listed. That in itself is pretty useful, right? But how do we make this an ever-growing corpus? Again, should we just use a Google Doc for the next year and keep extracting and keep it growing, where someone else can have access? [01:31:00] Sorry, sorry.

Mark Anderson: [01:31:02] Well, the Google Doc, as I understand it, has no end point, so it is an endless doc. Also, compared to a Word doc, you could say there's more resilience built into it, until Alphabet goes under, or some new catastrophe we haven't seen.

Adam Wern: [01:31:24] But just to clarify: is this to actually extract value, or is it an experiment to try to provide some pointers for other groups, or for the future? I don't really get which it is.

Frode Hegland: [01:31:38] The value is based on being able to navigate dimensionally. My favorite example of that is basically: select text, press Command-F, and see all the occurrences. You know, at some point I should be able to say, I want to see everything Adam has said, but not when this other thing happened, or whatever. That kind [01:32:00] of interaction is what we want to build, because some of the stuff in here is truly amazing. As of right now, I have to say our VR discussions have gone into really interesting territories. I think that is an example, right now, that is really worthwhile for a student or anybody to listen to. And, you know, we go into all kinds of different directions. So, OK, let's put it this way: we're having coffee with a friend. We hear someone at the other table talking about something interesting. We say, oh, we're doing so-and-so, you really should catch up on what we do in the community. What is the entryway? I think this is what Alan's been pushing: what is the entryway they should have? Some of it will be editorialized; we release a book once a year that's got an introduction and articles, so that's one thing. But to be able to refer to when Rafael said this, and when Peter brought up this really interesting thing: that has to be the interaction, and today has to be the test case. Right, [01:33:00] yeah. Peter.
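
The kind of dimensional navigation described here ("everything Adam said, but not when this other thing happened") is, at its simplest, a filter over the transcript. A minimal sketch, with invented data shapes:

```javascript
// Return everything one speaker said, optionally excluding entries
// that match some phrase, a crude version of "Command-F across
// speakers and topics".
function filterTranscript(entries, speaker, excludePhrase) {
  return entries.filter(
    (e) =>
      e.speaker === speaker &&
      (!excludePhrase || !e.text.includes(excludePhrase))
  );
}
```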

Peter Wasilko: [01:33:03] Maybe we could have, once in a blue moon, maybe once a month, a dedicated session on the future of tooling, just to talk about development tools, and to get those of us that are building systems to see if we can move towards some kind of a test bed platform that we could all be contributing code to, leveraging what we're working on individually, so that we start looking at technology integration. If I have a really great tool for doing a visualization, and Fernando has a way to do a 3D visualization off of that, we want to get to a point where we can start exchanging engine code with one another and building a larger, richer system.

Frode Hegland: [01:33:51] You're up, Alan. Ah, Web3, you guys.

Alan Laidlaw: [01:34:01] Ok, [01:34:00] so we're talking about two different ends of a spectrum. The way that we've been talking about it is: what is the ideal format for traversing this? And then the other side is: hey, we need to have something by Friday, right? So these are actually two totally different streams. The thing on Friday: Google Doc, maybe. If we rely on an app, you know, we get more out of the box, but it's less timeless. So my default is HTML. Let's just freaking put it on IPFS or something, just have a basic HTML format for starters, you know, and if we want, add a compact 'here's what it's about', and then build out from there, you know?

Frode Hegland: [01:34:58] Ok, so we have agreed that [01:35:00] I'm going to ask for this to be transcribed into a Google Doc. We will all have editing access; anyone else can go in and view it. And I hope it'll be ready by Friday; he's quite busy now, but there we go. And then we will all play with it in our own sandboxes, and if we have other requirements or needs, we will discuss that over time, right? So let me put it this way: on New Year's Eve, we went to see Emily's two best friends, who are godmothers for Edgar. One has two kids, the other one has three kids, so there were six kids altogether. We had a nice dinner, super nice day. But there was one giant elephant in the room: it was the warmest New Year's Eve on record in the UK. I am tired of just talking. I think we [01:36:00] have to act like gun-crazy Republicans and storm the hill, and the only way we can storm the hill is if we have a target. So this is our first target. We might have to give it a brand, which is useful; we're giving ourselves a beginning thing. GitHub, yeah, GitHub is interesting. That's a bit beyond my everyday technical expertise, but I'm sure you guys could teach us all how to use it. And should we just, yeah, we could just do pull requests

Alan Laidlaw: [01:36:32] And extract just as we need, and link as we need. There's even a Google Doc format that we could use, but go ahead. I mean, a Git doc.

Frode Hegland: [01:36:45] Ok. Why don't we? No, no, that's a very good interjection there. I'll get in touch with our transcription dude, Danny Lowe, who will be watching this, and try to invite him, maybe for next Friday [01:37:00] or Monday. And he will give all those who need it (I'm sure Brandel and Adam don't need it) a bit of an introduction to how that should be used. Would that be useful, Brandel and Adam? Would it be easier for you to use stuff if it's in that environment? Maybe.

Mark Anderson: [01:37:15] Or a Google Doc?

Frode Hegland: [01:37:17] No, no.

Brandel Zachernuk: [01:37:21] Uh, I avoid using it as much as possible, but mostly because I despise command-line tools; I use it for everything else. No, it doesn't have an impact for me. If I can get data, it doesn't matter; I'm a very chaotic person in terms of data sources, so I'm not particularly concerned if it's a Google Doc or something else. I mean, by virtue of this going on YouTube, it will be transcribed to the extent that YouTube does, and that's a really, really interesting starting point. I have a screen grab that I've taken of today's proceedings, such [01:38:00] that I can try to do that process of ascertaining who's speaking when, such that we might be able to kind of integrate that with other things. So no particular representation matters very much for me; more the provision of the specific data is the important part.

Frode Hegland: [01:38:21] This is exciting, because right now a thing happened: there was a proposal, and there was a point against it. So this is something that someone may want to listen to in the future, right? So, in the way that we have to learn to speak to Siri or Cortana or whatever, it takes a bit of literacy to remember to do that and to say things. Maybe we should think about there being another person in the room now with us, one who can't interact with us, but so that we learn to say things like, oh, that was interesting. Or we even give this person a name. Right, so we [01:39:00] say: hey, Jane (because I won't say Siri, right?), hey, Jane, what Brandel replied to Alan was really interesting. And then we have a tag, right? Can we develop a language for that? Should we have a virtual character?

[01:39:19] I like it.

Frode Hegland: [01:39:23] And of course, we would have to call it Doug, right, for Doug Engelbart,

Mark Anderson: [01:39:29] That works for me. What more have we got to say?

Rafael Nepô: [01:39:32] You know, I'm saying that I love having somebody in the room that will slowly optimize how we have conversations and discussions, in a way that is useful to be transcribed or to be recorded for the future. So if somebody is here, then they can ask: OK, you just mentioned the word hypertext; could you quickly define it? And then that is a snippet of information to be used in [01:40:00] the transcription. So it's not a full-text transcription like we've been doing; it's more, you know, getting bits and pieces and constructing as we go. It kind of breaks the speed and the flow of conversation, but it seems to be better to document it that way, rather than just having a raw data file of mumbo-jumbo that has to be sorted later.

Mark Anderson: [01:40:25] You're talking about having, basically, a human chairperson.

Rafael Nepô: [01:40:28] Yes. In the beginning, there will be more input from us into the text, into the document that this person will create. But eventually, the corpus of conversation will be structured enough that it's going to flow a lot easier. So it's a little harder in the beginning, but it gets easier and easier as we go.

Frode Hegland: [01:40:56] By the way, I'm sharing my screen now to show that I'm [01:41:00] thinking of calling it a DOUG discussion, and the reason DOUG is all uppercase is that it's kind of an acronym as well: Dialogue for Online Understanding. But that's not new. What I'm saying is, you know, we don't have a person, an actual person, which, Rafael, I think would be a good idea. But should we just try to figure out a way to speak to the camera sometimes, so to speak, and see if that becomes useful or becomes entirely strained? I don't know. I like it.

Mark Anderson: [01:41:40] Ok, cool. Interesting experiment to do. I mean, in a sense, it may prove to be all wrong, but we won't know unless we try it, which is the very reason to try.

Rafael Nepô: [01:41:52] Don't developers have the rubber-ducky method of developing, where you talk to the rubber duck to figure out the issues with the code? [01:42:00]

Frode Hegland: [01:42:03] Adam, you just pinned something. How did you do that, and what did you do? Was that on your screen, or was that...?

Adam Wern: [01:42:14] Yeah, if you do reactions and press the hamburger, the more-options menu, you can put any emoji up there. So we could have a secret code. And perhaps you could do keyboard shortcuts for some of them, in a way, to have markers. The problem is that Zoom removes them from the recording, I think. So what I've done in the past, when I did some graffiti, at some point when I added text to my window, was that I had to run it through OBS Studio and add a web page on top of myself, a transparent web page. Wow. [01:43:00] So that's a way to add more information on the screen, either text or symbols or markers.

Mark Anderson: [01:43:07] Literally exploding head.

Alan Laidlaw: [01:43:09] I know. Yeah, this is an evolution of the language right here. I mean, this could really become something if we had a regular way to do it visually. I mean, it's pretty trivial these days to say: look for the head exploding, and then tag it purple.

Rafael Nepô: [01:43:25] You know, let's get somebody that writes in Pitman shorthand, and then we can just tell people: look for the squiggles, and you can quickly find what you need.

Frode Hegland: [01:43:34] Well, interestingly, on that issue: so, Brandel, on the Mac we have good live speech-to-text, but I'm not sure how accessible those libraries are. Because I'm thinking, maybe even just one of us, like maybe me as the host, has a basic app running that tries to transcribe what everyone's saying. But the key is that if I choose to do those little insert things, it's [01:44:00] recorded, so that later on, when you have the YouTube transcription, they can know where to insert them. So whoever has a little thing like that, we have enough text to find those points. Does that make sense? Would that be useful? Well.

Mark Anderson: [01:44:16] Yeah, yeah. If you have some kind of reconcilable time coding system, then it should be possible to put all of those bits and pieces together. Absolutely.

Frode Hegland: [01:44:27] The reconcilable time code system is amazing because that is part of the Zoom video. It’s in the lower right hand corner, which is such a gold mine for this, I think.
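
The "reconcilable time codes" idea, markers captured live and merged back into a transcript later, could look something like this. The data shapes, the nearest-cue heuristic, and all names are assumptions for the sake of the sketch:

```javascript
// Convert an "HH:MM:SS" time code (as burned into the Zoom video)
// into seconds, so markers and transcript cues are comparable.
function toSeconds(hms) {
  const [h, m, s] = hms.split(':').map(Number);
  return h * 3600 + m * 60 + s;
}

// Attach each live-captured marker to the transcript cue whose start
// time is closest, returning markers annotated with a cue index.
function attachMarkers(cues, markers) {
  return markers.map((marker) => {
    let best = 0;
    for (let i = 1; i < cues.length; i++) {
      const d = Math.abs(toSeconds(cues[i].start) - toSeconds(marker.time));
      const bestD = Math.abs(toSeconds(cues[best].start) - toSeconds(marker.time));
      if (d < bestD) best = i;
    }
    return { ...marker, cueIndex: best };
  });
}
```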

Alan Laidlaw: [01:44:41] Gosh, that is brilliant. I see that I see the future. The immediate future of Zoom chats being filled up with these coded emojis so that they can be extracted later on for particular meaning, like that’s a that’s a platform innovation.
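
Extracting Alan's "coded emojis" from a saved chat log is a simple scan. The emoji-to-tag mapping here is invented; a group would agree on its own codes:

```javascript
// An agreed vocabulary of chat emoji codes (illustrative only).
const CODES = { '🤯': 'highlight', '❓': 'question', '📌': 'decision' };

// Scan saved chat lines and turn any recognised emoji into a tag,
// keeping the original line for context.
function extractTags(chatLines) {
  const tags = [];
  for (const line of chatLines) {
    for (const [emoji, tag] of Object.entries(CODES)) {
      if (line.includes(emoji)) tags.push({ line, tag });
    }
  }
  return tags;
}
```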

Mark Anderson: [01:44:58] I love it.

Frode Hegland: [01:45:03] I [01:45:00] promise not to do any more investment in coding this year, well, for a few months. But maybe this... I mean, it's insane. You know how I love the tap-back thing on Messages, right? We're basically desperately, like little animals, trying to claw at the door. I want to do a tap-back on video. All right, it's such a basic thing. Please, APIs of the universe, let us build it.

Rafael Nepô: [01:45:32] Uh, follow the white rabbit. I’m going to have to take off.

Alan Laidlaw: [01:45:36] There may be an argument for Twilio Video here, honestly, because it's far more open in what you can do.

Frode Hegland: [01:45:44] Sorry, I just want to ask, Rafael: we'll see you here on Friday, right?

Rafael Nepô: [01:45:48] Yes, I’ll be here on Friday. Now, my schedule is back to normal.

Frode Hegland: [01:45:52] Excellent.

Mark Anderson: [01:45:53] Thank you.

Frode Hegland: [01:45:55] Twilio Video. Alan, tell us more. I've got to go in a few minutes too, but I'd [01:46:00] love to hear about that.

Alan Laidlaw: [01:46:01] Yeah, it's just, you know, the same way that we handle SMS and all of the other tools, except that they're made developer-friendly first. So the APIs are open and completely, you know, hackable, right? You just have to have the idea, and it's very easy to spin up, um, an instance of Twilio Video. It doesn't come with its own interface; that's the limiting factor and also the opportunity: you can make your own Zoom and then invite people, et cetera. I think it's pretty much feature-matched with Zoom at this point, you know, like the blurry-background stuff. But [01:47:00] then you can do way more on tracking the channels.

Frode Hegland: [01:47:05] I was just thinking our unique selling point could be blurry foregrounds. Love it. Love it. Sorry. Mark, what do you want to say?

Mark Anderson: [01:47:17] It's just a quick interjection that follows on from something that Alan said earlier, about there being the production end, and the tool, and the content. And I'd just bring up another thing we've been fishing around, which is, in a sense, the manner of delivery versus what goes in the grinder at the other end. And I think the first is more tractable at the moment. Naturally, it's the kind of thing we like to do: we think, ooh, what's the really nice, sort of sexy outcome we're going to produce that's going to make everyone want to watch this? And that's because it feels like the thing to do. But probably the more [01:48:00] useful and long-term good thing we can do is actually to follow on from the things we've just discussed and go through a couple of cycles, probably quite painful ones, of things that didn't quite work, to move from the state of 'yeah, well, we have all these channels' to 'yes, practically, today, you can sort of do this', and we can posit that these other things can follow on. Maybe not many today, but we can certainly predict where it'll go, if not quite how it looks at the end. And sure, I know some people want it handed on a plate with a pretty bow and nice fonts and everything, but I think a lot of people who get involved will actually come for the real meat of the thing, because that's the excitable and useful part.

Frode Hegland: [01:48:47] Alan, can you tell us more, just to go on with that hardcore thing: what would it involve to make our own video?

Alan Laidlaw: [01:49:02] Well, [01:49:00] I mean, we could spin it up. I could spin up something, and if I can join Friday, I could probably have something to show on Friday. It would have no interface, so I need to think about what's the bottom line needed. I didn't suggest it earlier because I think it's overkill on the development side for what we're talking about. But if we can get the transcription with emojis added, let me look into that. Otherwise, Zoom does the job, you know?

Frode Hegland: [01:49:45] Yeah, that, and to be able to have the text chat and the transcript. You can have that, so that's very, very exciting. And it's a bigger question, but [01:50:00] now that I'm a resident of the Oculus Quest: where would we... and I'm asking everybody: where would one or two of us sitting with a headset, in a chat like this, fit in? Until we have face tracking, it would just be kind of stupid; we wouldn't really have an environment, right? Or is that not correct? I mean.

Adam Wern: [01:50:25] We already have Peter, who is a photo, so it's not less dynamic than that. So if he were an avatar that didn't really face-track, but was there and moved around a bit, maybe that could...

Mark Anderson: [01:50:43] Be awkward to build an avatar by the end of the week?

Frode Hegland: [01:50:46] Well, I mean, this is obviously a longer-term question, but it's relevant, because we don't look at each other in detail all the time; it's the ambient 'are you paying attention' and 'are you excited' that's useful, right? And it's social. You know, [01:51:00] this whole metaverse, 'I am walking over here' kind of stuff, is currently a bit ridiculous, but clearly, you know, something is brewing. So do we want to take this record into that world, where at some point there's a virtual representation? Because we would all like to be in a space rather than in a box, right? At least to try it.

Brandel Zachernuk: [01:51:25] So, in terms of people being able to be co-present, the least worst option that I'm aware of at this point is something we can play around in. I'm not aware of the ways in which people who aren't in virtual reality can participate with people who are. I think that some of the video chat platforms have started to talk about the way in which VR participants can participate, but the thing is that that view has to be completely synthetic. So, yeah, I don't think that there's [01:52:00] anything good yet for being able to manage multiple people. I guess High Fidelity might actually do it.

Frode Hegland: [01:52:11] Because for all my joking about Alan's camera being off and then coming on, and me saying, oh, I don't want to look at you: I would much rather look at a real Alan in low pixels, like we have now, than a sharp, high-res cartoon version.

Brandel Zachernuk: [01:52:25] Right. And the other aspect of it, which is why VR, for the time being, gives you different, but in many ways worse, signals from the perspective of conversation, is that the emotion that we have in our faces is much more important than whether we have six-degrees-of-freedom tracking of our heads. And you get that in the video chat anyway. And so, for people being VR participants, until such time as we have reasonable face tracking as part of it (Oculus has announced that they have the intention to put some trackers in and onto the [01:53:00] face), we're in kind of a bind with regard to being able to capture the emotional performance of a participant within virtual reality. So, yeah, I like more immersion, and VR is one way to do that for some things. But for co-present social interaction, it's actually worse at this point. Ok.

Frode Hegland: [01:53:22] Fantastic. I look forward to Friday. Alan's got homework, I've got some homework. Brandel and Peter, if you just happen to have any inspirations about how this might fit your own workflow (Adam, you too), please feel free to note them down. It is important to me that these meetings are pleasant, so I don't think we should expect anybody to do anything ever outside of the meetings, you know, other than the ones who really passionately want to. That's why I'm a bit wary of the kind of actual, normal work outside. Oh, you [01:54:00] won't be here Friday, Peter? OK, Monday will be good. Ok, I have to rush downstairs. Anything else in closing?

Adam Wern: [01:54:10] Yeah, I mean, this is my library. I stole five hundred book images from Tools for Thought Thinker, because they had a very nice home page. That screen is great, the book covers. So these are five hundred books, because I looked behind me and counted about 500 books in the bookshelf behind me, and I wanted to see what that would be like if you put yourself inside your library, where you can pick any book. Now it’s repeated there, it’s just one hundred books repeated. But if you stood inside the circle of all your books, you could just point at a book and get it, because you can show the PDF in the browser. [01:55:00] So it was just after Brandel talked about three.js, I just wanted to try it out. So you can see the library. It’s not my selection. If you find the crazy Atlas Shrugged books here and there, they’re not mine.
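Adam’s circular library, standing inside a ring of covers and pointing at one to open its PDF, is essentially a layout-plus-picking problem. As a minimal sketch of just the layout half (the function name, the default radius, and the three.js-style y-axis rotation convention are my assumptions, not anything from the talk), the placement math could look like:

```javascript
// Place N book covers evenly on a circle around a viewer standing
// at the origin, each rotated about the y-axis to face the centre.
// In a three.js scene, each placement would become a textured plane.
function circularLibraryLayout(bookCount, radius = 3) {
  const placements = [];
  for (let i = 0; i < bookCount; i++) {
    // evenly spaced angle for book i
    const angle = (2 * Math.PI * i) / bookCount;
    const x = radius * Math.cos(angle);
    const z = radius * Math.sin(angle);
    placements.push({
      x,
      z,
      // y-rotation so a +z-facing plane looks back at the origin
      rotationY: Math.atan2(-x, -z),
    });
  }
  return placements;
}
```

The pointing half would then be a raycast from the controller (three.js ships a `Raycaster` for exactly this), and on a hit the PDF can simply be opened in a browser tab, as Adam notes.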

Alan Laidlaw: [01:55:20] But imagine this for topics. Imagine this for your world of, like, what actually is going around your brain. And we have a conversation and you can see it float behind you as you’re honing in on, you know, this is about Doug Engelbart, this is about Xanadu. Anyway.

Adam Wern: [01:55:36] Yeah, I tried kind of doing comic bubbles over my head in one meeting, doing graffiti on my own head, writing thoughts instead of doing it in the chat, because the chat is a bit disconnected from the pictures. I would love to see that text coming inside our frames here, but it could be distracting, [01:56:00] of course. But I think we could find a good way of doing it. Ok, quickly.

Frode Hegland: [01:56:07] I’ve got to go, to close it down. But Adam, you are going to start working in VR soon. You’re going to get yourself an Oculus.

Adam Wern: [01:56:15] I looked at it, you still need...

Frode Hegland: [01:56:17] Sorry, that sounded like a question. It was not.

Adam Wern: [01:56:20] Yet. I looked at the prices, I looked at the availability, but there is the issue of actually registering your Facebook account, yeah.

Mark Anderson: [01:56:30] I think they may have dropped that requirement. Is that true in practice? I’ll take a look at the transcript to see whether it’s the case. I did reset my Oculus recently, and I think that they have dropped the requirement: you need to have an Oculus account, but it’s not a Facebook account. Oh, OK, great. I’ll take a look, and I might be wrong, but that was my understanding from the last two.

Adam Wern: [01:56:53] If that’s true, then I’ll get the Oculus and...

Frode Hegland: [01:56:57] I understand the concern about a Facebook [01:57:00] account. I’ve always seen Facebook as being completely public. Anything I put there, even though it’s set to friends, is something I’m completely happy with having public. But of course, if we put our knowledge work into it, it becomes something different. So yeah,

Adam Wern: [01:57:12] But it’s not only that, it’s also what you’re supporting in the world and, kind of, yeah, their addiction services. So it’s a political statement. And to me, it’s just as important as the climate crisis, because it can fuel the climate crisis as well. I don’t want to get into that too much now, but it’s much more than being private or exposed. It’s many, many things.

Frode Hegland: [01:57:40] I don’t... we cannot live in the metaverse. We cannot have Zuckerberg own our virtual future. There’s no question about that. Even though it’s an expensive device, I look at it as disposable. It’s something to learn what it feels like. I completely abhor much of what’s going on there, but you know, we all have to make our own decisions as to what guns we’re prepared [01:58:00] to pick up, so to speak, even though there’s blood on them. We look forward to Friday, guys. Please remember, if there’s someone you want on the list, just invite them.

Peter Wasilko: [01:58:13] One last point I saw that there’s some project on Kickstarter trying to do a VR headset with the Linux platform. So we’ll have to see how that pans out. And of course, we’re waiting for Apple to come out with its offering, which will hopefully be sooner rather than later.

Frode Hegland: [01:58:30] Yeah, but by the time Apple comes out, we need to be literate in how it feels to be in VR. I think that’s a really key thing, you know, because a lot of it, if it’s theoretical... you’ve got to go into the forest to understand the trees. Ok, love you guys, and look forward to Friday or Monday. Bye.