Rough Transcript : 14 August 2023

Video: https://youtu.be/-rPFAk2lOjk

Chat Log: https://futuretextlab.info/2023/08/15/chat-log-14-august-2023/

Frode Hegland: Hello, Allo. Oh, my God. I’m one minute early and there’s already two of you. Not a bad thing. How are you, Leon? I haven’t seen you in a while. I know one thing. You are muted.

Leon Van Kammen: Oh, sorry. Yeah, I’m fine. I got back from a holiday and I’m getting back up to speed.

Frode Hegland: Good. Good. How was your holiday?

Leon Van Kammen: Yeah, it was great. We visited a lot of cities in Germany and Austria and Czech Republic, so yeah, it was really nice.

Frode Hegland: Oh, excellent. I think there’s something. Sorry. No, sorry. Was something strange with my audio, but I fixed it. Yeah. What were you saying?

Leon Van Kammen: Yeah. And how was it here?

Frode Hegland: Yeah, it was good weather. It’s been on and off. And most important thing in my world is that this morning, for the first time ever, my son managed to open the outside door by himself. Oh, wow. Literally a milestone of I can open the doors. Okay, great. So that was amazing. Nice. How about you, Mark? How’s your summer been so far?

Mark Anderson: Oh, fine. Yeah. I’ve just been away up north, visiting family, so I just got back yesterday evening. Also some busy preps for Hypertext and finishing up the final couple of months of the academic project I’m working on. On data.

Frode Hegland: Cool. I had a bit of a stress earlier today that I think we all need to discuss. It’s about AI. What I did was I found a Mac app called PDF Pals, and it actually does a whole PDF. It doesn’t have a limitation on how many characters, tokens and all of that stuff, so I guess they have a nice stitching mechanism built in. So I ran it through some of the PDFs I have with our work, and it’s staggeringly good at doing summaries and keywords and key phrases and such. This is something Mark and I have been talking about for a while. So I wrote this little thing that I’ve put in the chat for us. Hopefully a few more will join us today. But I hope that as a community we should probably decide that one of the key topics for our next book and symposium needs to be AI, completely ignoring the dangers. Not because they’re not horrible, they are, but other people are dealing with them. Only focusing on: who do we want to be with AI? What kind of person? That has to come first, and then we kind of look back from that.

Frode Hegland: We don’t want, I mean, of course we want AI to do the boring stuff, but we don’t want it to do our thinking. So what is that balance? It’s obviously tricky, but also important, because last Friday I was with a group of dads for an evening, through someone, anyway, and one of them is from one of the major universities in the UK. I learned that they now use AI to sort applications from students. They get so many applications for these high-powered programs, and the applications are about 30 pages at least. So for decades they’ve used humans to summarize and put things in spreadsheets, so the people who need to make the decisions have it available. It turns out the AI summarizing and organizing is better, meaning fewer errors, because humans make errors. That’s just the reality. And they have seen now that the majority of applications coming in are written by AI on behalf of students. You know, is that validated? Um, it’s just their feeling that it’s…

Mark Anderson: In other words, it’s not. It just feels good. I see: it’s less effort and it’s done in a spreadsheet faster. But I am genuinely interested, because…

Frode Hegland: Which one validates.

Mark Anderson: Enough to rest on?

Frode Hegland: Are you talking about validating the organizing or validating? Yeah.

Mark Anderson: So if people are saying it does it better, I mean, we seem to be in a loop at the moment where, as you just said, people are using effectively some kind of scripting device, which, I want to make sure I’m talking…

Frode Hegland: …about two different things. I just want to understand what you mean so I don’t misunderstand you. Are you talking about using AI for the analysis of the applications, or are you talking about the students using AI to write the applications?

Mark Anderson: Well, both, because they’re interconnected; it’s a sort of garbage-in, garbage-out loop. I mean, arguably the problem is that the overall form is not fit for purpose, because somebody has to use a machine to basically fill out a form that asks for too much information they don’t really understand, and then someone on the other end has to take a machine to unpack that. There are two things there. One is they have a serious problem and they need to actually think through their process. But the other part, and this is a perfectly serious question, is how are they actually validating it? Because people tend to say, oh, it’s much better, and then when you ask, it’s sort of, well, it feels right. And I don’t doubt that’s true, but I’m just surprised that they’re not testing a bit more deeply, because otherwise, um.

Frode Hegland: Okay, I can answer that. First of all, it’s not just a couple of forms. But hang on, just wait for Mr. Peter to join us. Ah, Peter.

Peter: Good morning. I’m brunching with Mum, so I have the camera off, so you don’t see me eating.

Frode Hegland: Okay, sounds good. We’re just talking about how universities have now started using AI to process applications. So, Mark, to address your question, not necessarily answer it: from what I learned on Friday, first of all, it’s not just a matter of filling in forms. A lot of it is people’s recommendations, extracurriculars, their essays, all these things. There is a myriad of things, because this is for advanced degrees. It’s not just doing a transcript; it also means things like translating different countries’ grades into UK standards. There’s a lot like that. And they have found, and this is someone who’s been doing it for many, many years and is the one in charge of building the AI models, that the error rate is lower with AI than with humans. What the exact numbers are, I don’t know. But we’re talking not about misinterpretation, saying this applicant is better than that one, but quite literally data ending up in the wrong part of the Excel spreadsheet. Right?

Mark Anderson: And so how much of that is actually AI, and how much of that is basically just clever algorithmic extraction, which is not AI as such? In other words, it sounds as though someone’s done a much better extraction and translation schema. The term AI just gets used really loosely.

Frode Hegland: Yeah, the reality of it today is that it is a loose term.

Mark Anderson: Yeah, but I don’t think we should be. It doesn’t help us to be as loose in our understanding of it. Otherwise we’re contributing to the problem as opposed to addressing it.

Frode Hegland: Can’t they use ChatGPT? So it’s AI.

Mark Anderson: How do they do the provenance stack? So if you wanted to question it…

Frode Hegland: Mark, I don’t know. Okay. And look, I know you’re a hugely critical person when it comes to AI, but all I’m stating at the beginning of this discussion today is the fact that a major university has found AI to be better at processing this. Of course there are issues, right? All I’m saying is that students have also now been shown to use it for their applications. Students will be using AI to write papers no matter what teachers and schools and universities say. So this is literally my one-minute intro. I think it is useful for our community to look at, accepting that it exists, how do we want it to be? And it’s actually an incredibly hard question, because as you and I discussed last week, for instance, doing something as simple as AI helping with the Library in software, like I’m doing in Reader: what for? Is it to help you find connections? Is it to help you learn? Is it to cut down on your workload? The what-for question that you asked is really, really important.

Peter: Yeah, I’d rather think that if I was a college admissions person, I’d be looking at an organization like Kaplan, say, and have them get my candidates to come into a physical room in the real world where they’d have to answer some questions by themselves without using any tech, so that I can see that it definitely is 100% their writing, as opposed to ChatGPT combing the very best admissions essays, merging them together and generating an essay, which gives me absolutely zero indication of whether the student can think for himself or not.

Frode Hegland: Okay, but that’s not possible, because for the particular university and degree I’m talking about, it’s 2,000 applications at 30 pages each, at a ratio of something between five and seven applications to one admission, you know, to even get closer down the line. So of course, having the human interaction is better.

Peter: Well, maybe a first pass by AI, and then force them into a situation where they can’t be…

Frode Hegland: Okay, Peter, I’m not going to… sorry. Brandel, just briefly, we’re talking… I just put something into the chat here. I’m just going to say: really, provenance for what? Okay. Well, okay.

Mark Anderson: Would you like me to explain? Because I get what you’re saying, but there’s no need to be defensive, because I’m not arguing against what you’re saying. What I’m reflecting is that there’s a massive interest in AI, especially in a situation where, for each of us: if it makes my job seemingly easier, so I have less work to do, it’s good; if it in some way works against me, it’s bad. Which is understandable, but simplistic. One of the issues, though, is when one makes a flat statement like: okay, essentially I fed it to ChatGPT and I got an answer out that was better. Then my point about provenance is: okay, if I want to go into that and say, right, why did these ones float to the top of the stack?

Speaker5: Okay, that’s that’s, you.

Mark Anderson: Know, I need to be able to do that.

Frode Hegland: Yes, I understand that. Okay, so Brandel, the discussion we’ve just started is based on last Friday, when I spent an evening with a couple of other dads. One of them used to run admissions for one of the big universities in this country, for one of the very advanced degrees. The applications are roughly 30 pages of various documentation, essays and everything per student, and they get something like 2,000 student applications. They used to have humans do the basic analysis and spreadsheeting and all that stuff. They now use AI, because in their view it does a better job, meaning fewer errors. I also learned that already now these applications are generally written by AI as well, so it’s kind of an arms race. So the reason I’m saying this is I downloaded this app, PDF Pals, that I just mentioned to Mark as well. What it allows me to do is analyze, through my own GPT key, an entire PDF rather than a section, so of course they have a clever way of chunking and all of that stuff. And this today made me really quite stressed, because I feel the summarization in AI is absolutely phenomenal. You know, having gone through many of our articles and so on, I feel that it really gets the gist.
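The chunk-and-stitch mechanism Frode is guessing at is usually a map-reduce pattern: split the document text into pieces that fit the model’s context window, summarize each piece, then summarize the combined summaries. A minimal sketch, assuming nothing about PDF Pals’ actual implementation; `summarize` stands in for any LLM completion call, and all names here are hypothetical:

```python
# Map-reduce summarization sketch: chunk the text so each piece fits a
# model's context window, summarize every chunk, then summarize the
# combined partial summaries into one final summary.

def split_into_chunks(text: str, max_chars: int = 8000) -> list:
    """Split on paragraph boundaries so chunks stay near max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if len(current) + len(para) > max_chars and current:
            chunks.append(current)
            current = ""
        current += para + "\n\n"
    if current:
        chunks.append(current)
    return chunks

def summarize_document(text: str, summarize) -> str:
    """`summarize` is any callable mapping a prompt string to text."""
    partials = [summarize("Summarize:\n" + chunk)
                for chunk in split_into_chunks(text)]
    if len(partials) == 1:
        return partials[0]
    return summarize("Combine these partial summaries into one:\n"
                     + "\n".join(partials))
```

A short document makes one chunk and one call; a long one makes one call per chunk plus a final stitching call.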

Frode Hegland: I also feel that PDFs, academic ones, are often quite hard to read. There is that specific language, there are the thin columns, all these issues. Anyway, so I got very stressed earlier today about the issue of… hang on, my screen’s acting up. What’s going on here? One second. Apologies, everyone. There we go. Right. So, yeah, I’ll put a link in here. I think what we should do as a community is write a few prompts, joke intended, to our community, to invite people to write for the new volume and also to invite further people. Let’s say that we accept there are many issues with AI, and we ignore them because others are addressing them. But what we want to do is try to imagine who we would like to be with AI. You know, do we want it to be like a person to speak to, or do we want a sculpture? That kind of stuff. I really think it’s useful for us to go down that line and start asking these kinds of questions. Clearly the children will be using AI. How do I want Edgar’s brain to be involved in an AI-augmented system? Over.

Brandel Zachernuk: My issue is that I kind of… I still don’t see why I am not entitled to reject the entire premise of AI. I don’t think it’s real, and I don’t think that the projection of AI as a phenomenon helps us a great deal. I don’t know if I linked, or if anybody saw, the article about how the term AI was coined in the first place to avoid having to relate a person’s or people’s movement to Norbert Wiener and cybernetics. I’m much more comfortable with the cybernetics sort of landscape, framework and implications for what this kind of work is, because most of the things that people call generative AI right now are linear regression models and things like that that people use. So when you were talking about the admissions: a friend of mine used to work in New Zealand immigration and intelligence, and he said that… hey Alan, sorry, we were talking about AI again, it comes up. Um, and so in the end, essentially the top 20% and the bottom 20% of immigration applicants are handled through a largely automated screening. I don’t know that he called it AI at the time we were having these conversations, but it’s the sort of thing that would be captured. And it works because the context of the applications is so straightforward. If you give the same set of documents to the people, based on sort of internal audits and reviews, then they will come up with the same decisions, because there are really high-confidence signals pointing to: this is a person we should accept, this is a person we should reject. And yeah, I find the nature of the discourse once people use these automated tools and term them AI to be so much more fraught that I don’t know what I get out of it.

Frode Hegland: All right. So one thing that I find really boring is discussing what AI is. It’s a worthwhile topic for a specific community, but I think at this point we honestly just have to accept that there is such a thing that people in general call AI. Whether it’s strong AI or just an algorithm, all these things are important, of course, but when someone sits down to work, there will be tools with this label available to them, right? And, as my professor, and friend also of Mark, Les, says, it shouldn’t be called artificial intelligence; it should be called advanced IT, because it’s such a huge collection of things. I absolutely agree with that. But what I do think we need to do, in the same way that we have embraced and need to continue to embrace rich environments, AR, VR, projectors, all that stuff, which is so crucial, is really wrap our brains around who we want to be when we use this. I don’t think our community wants it to do the thinking for us. And when Mark talked about provenance earlier: yes, I don’t think right now AI is good at giving answers, but it’s really good at doing a kind of analysis. But this needs a lot of prompt engineering. I was just reading up on it today: if you ask it to do something, it’s actually useful to have it go back and check itself. There are all these things. But let’s say we had proper research money just to build somebody’s office for a student or professional: how would we want to interact with it? What would we like it to be? I think these are worthwhile questions.

Brandel Zachernuk: I’m sorry, I agree with that. I would also say, though, that we’ve been through a couple of movements and fads that have taken up a lot of oxygen and money, in the form of things like cryptocurrency, Web3, whatever that is, NFTs and all of that stuff. And there are times when it can be challenging to say: what happens when we reject the entire premise of this and say that no aspect of this is correctly framed for us to be able to get a benefit out of it? And, you know, I feel like the scale of fraud that is accompanying AI right now means that we need to be able to reserve that right and say: what if everybody is either wrong or lying about this? So, yeah.

Frode Hegland: I don’t mind that as a specific discussion, along with a lot of similar things. But I do think it’s time to… you know, I don’t know if some of you saw it, I posted on Twitter this huge computer from 50 years ago going into a building, and someone holding up a tiny little chip saying it’s the same thing. In the same way that we have accepted that the headset-style stuff is going to be real soon, so is the AI stuff. I really think we need to, let’s call it Amber or Azure, you know, give it whatever name, but that thingamajig, because first of all, for summaries and analysis now, it’s absolutely brilliant. I think it is better than human. I’ve been going through a lot. Mark helped me last week go through a few things in Reader: if I build in an AI thing, just ChatGPT, just to help do summaries of the articles, how best to present it? These are important issues because, goodness gracious, it’s actually very good. But beyond that, beyond just having more text from text, where can we go? There are things we also discussed, like: extract keywords in this corpus and only show me the keywords that at least two of the documents have in common. But why is that? To see connections. It’s easy to moan; I do it too. But when we moan to someone, they have every right to say: well, what do you want? So that’s why I’m hoping we can talk a little bit about: this is what I want, if I have control of AI interactions. I’m wondering if today, if you look at the list that I sent, maybe we could think of some further questions and send it to the community at the end of the call. Leon, please go ahead.

Leon Van Kammen: Yeah. Um, I’m not sure if this is the right comparison, but could we compare this to something else? Like, let’s say somebody told you: hey, I, um…

Alan Laidlaw: I.

Leon Van Kammen: …I’m really impressed by these self-driving cars. I will get rid of all my employees; I will just use these self-driving cars. The error rate is much lower. And, I’m curious, this is a question back to you: if somebody says something like that, would you imagine, like, no, I don’t want people to be like that? Would you not like your child to say these kinds of things later, like, oh, and be enthusiastic about getting rid of people? Is that partially what the stress is?

Frode Hegland: Perhaps. I have a self-driving car; I drive a 2016 Tesla. I bought it new back in the day and still have it. It doesn’t have the updated tech, and I think it’s a great analogy for many reasons. First of all, when I drive down to Southampton, on stretches of the motorway I use it to check my email, things like this. I can spend a minute or two feeling quite safe doing that. Do I believe Elon Musk’s promise, which he has made five years in a row, that by the end of the year it will be completely autonomous? No. So that’s half of it. The other half of the question is: do I want to get rid of people? Absolutely not. But, you know, I’m going through my corrections now, and I’m not saying that my software and what we’re doing should all be about someone doing a master’s or PhD, but it’s absolutely insane how many documents I have read that regurgitate the history of hypertext just to prove that they know what it is. There is a lot of filler out there. And in order to connect these things, to look at these things, I want to include the human brain. I think that is very, very important. But the human brain is a finite resource. So getting rid of crud, and defining what crud is, is of course hugely problematic, no question, I completely agree. But there just has to be an acceptance that, you know, I’m a father.

Frode Hegland: I’m scared. There’s a lot of crap happening in the world at the moment, and AI is one of them. So I’m not saying let’s just pile on the AI and everything will be fine. Absolutely not saying that. There are so many issues with it taking work, doing creative stuff, absolutely, no question. But I really think the issue is: well, what do you want then? Considering AI is happening, it is real in some form; considering it excels in some places and in other places it’s actually devious. I think we need to decide how we want it, and I think it will feed also into our VR-type work. As Doug Engelbart said, dreaming is hard work. The more I look at coding for the Vision Pro, the more I do different things, the more I see, like we discussed last time here, that space management is absolute hell, because you have more opportunity. AI is problematic because it happens in a black box and it’s almost infinite in a sense. But, you know, what do we want? I see what you write here, Mark, and I completely agree: what is filler in papers is contextual and varies depending on expertise and point of view. Oh, absolutely. But the thing is, I think my thesis would be much better if I could say: for the history of hypertext, refer to Mark’s paper, page so-and-so.

Mark Anderson: I’m sure, and I understand that. Really, what it says to me is that the issue about things like there being too much boilerplate or filler is our lack of attention to basically the framework of our writing, but that’s a community matter to decide. One of the things that I’m genuinely interested by, and I don’t mean to sound particularly doubting of AI as such, I think it’s tremendously useful, is that I find most of the things that are put in front of me and called good turn out to be someone saying: hey, I don’t have to do this anymore. But without any quality control applied to that, because, look, I press a button or I speak to this thing and it can answer back. And I totally see that in a lot of basically bounded problems, especially ones that are more graphical than textual, machine-learning AI is doing very good stuff, you know, some of the medical diagnosis work. And in a sense, why wouldn’t it? The kind of scary thing there is that we still don’t know quite how it does it, because it can’t tell us why it thinks this one is good and this one is bad. And that is far more important to concentrate on than the general part of it, because we want to have trust in these things. Otherwise we really do lose control and it becomes a magic eight ball.

Speaker5: Mark.

Frode Hegland: Mark, I agree with half of it. First of all, when AI finds a solution and statistically it’s better than something else, like cancer detection, where it has been used, a human operator often also couldn’t tell why. It’s almost instinct. Let’s call it machine instinct, right?

Speaker5: I don’t think that’s true.

Mark Anderson: Is there evidence to support that?

Frode Hegland: Okay, here’s the thing. When it comes to.

Speaker5: Okay.

Frode Hegland: I simply don’t want to spend any time discussing the negatives of AI, because there are so many forums for it. You know, ask AI if Doug and I have worked together, and it’ll tell you the truth this much, and then it’ll make up crap here. Right. I agree with that 100%.

Speaker5: Right.

Mark Anderson: It’s not a negative, good-versus-bad thing. I think you’re misunderstanding.

Frode Hegland: There are specific things it is very good at. If I ask it to summarize a document, that is something I think it does really well. Better than most humans can.

Mark Anderson: Yeah, but the danger is… I don’t see it as a sort of binary, zero-sum game of it’s all good or it’s all bad. The point of raising some of the things I’m talking about is that it’s a conduit to understanding how to get to the question you’re asking: how do we want to relate to it? Because if we don’t really understand how it works, is it just like joining a new religion, where the guy in the big robe tells you what to do?

Frode Hegland: No, it’s not. You know, we have an amazing group of people here. Let’s take Brandel, for instance, with his position and his experience: when we ask Brandel a question about VR, I don’t know how his brain works, but I have established trust. And to re-emphasize: asking questions of AI at this point I think is highly problematic for that reason. But we can ask AI to help with analysis, not just give us an A-or-B answer, but to help us. One of the key things on the OpenAI website says: do not trust it; ask it to check its own work. You know, look at the different data, take it into Excel. And that’s where I’m really talking about what kind of copilot, so to speak, we ideally want AI to be. I mean, I started framing VR as being a thinking cap; that’s kind of my joking term for it now, and that’s how I’m presenting it to Ismail and Vint. It’s a thinking cap; you don’t use it all the time. Same with AI. The stuff it does well, how do we want it? And it’s really, really hard, because even the summary thing: where should we best present it, even if we accept that it works? Should you get the summary when you read the document? Should it be for a corpus? Sorry, Leon, please go on.
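The “ask it to check its own work” advice mentioned here can be wired into a simple two-pass pattern: draft an answer, have the model critique the draft, then revise. A minimal sketch, not the OpenAI API itself; `complete` is a hypothetical stand-in for any chat-completion call, and the prompt wordings are illustrative:

```python
def answer_with_self_check(question: str, complete) -> dict:
    """Draft, self-critique, then revise.
    `complete` is any callable mapping a prompt string to text."""
    draft = complete(question)
    critique = complete(
        "Check the following answer for errors, unsupported claims, "
        "and contradictions. List any problems found.\n\n"
        f"Question: {question}\nAnswer: {draft}")
    revised = complete(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\n"
        "Write a corrected final answer.")
    # Return all three stages so a reader can inspect the provenance
    # of the final answer, not just the answer itself.
    return {"draft": draft, "critique": critique, "final": revised}
```

Keeping the draft and critique visible, rather than only the final answer, is one small way to address Mark’s provenance concern.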

Leon Van Kammen: Yeah, I think actually it would be great to use it as our personal Mark Anderson, because, for example, Mark was just trying to clarify the question and not immediately accepting it. And I think it would be great to have, as a minimum requirement from a person to AI, that you want your question to be questioned in the first place, in order to get to, let’s say, an intermediate answer. What I find a bit annoying about these ChatGPT kinds of solutions is that people use them, they can ask a stupid question and they get an answer, and they think it’s great and wow, it’s fantastic. But I think that is very much, as Mark Anderson said, garbage in, garbage out. And I would really like to be a person in the future who is enthusiastic about a certain minimum standard of AI input and output between my brain and this machine, where it doesn’t give you these sort of direct answers, which are most of the time a bit silly.

Frode Hegland: This is exactly what I wanted to discuss, Leon. And I’ll give the mic over to Alan in a second. One of the things I thought about while walking today, being very frustrated, was: when you have it analyze, let’s say, an academic PDF, don’t just do a summary, but do other things. Are there any clear logical fallacies in here? Are there contradictions? Basically, you click a button, but you have a few; you can choose what you apply, and you can pre-make that. So when you open something up, it’s got a bit of basic analysis ready for you. Similarly, when you author work, it would be really useful to get all the same analysis on the exit, on the export. You can see: okay, this is how AI sees my work. If it’s wrong, fine, export anyway. Or: no, no, these are actual issues, let me go back. So yes, the conversational mode in this sense is really interesting to discuss. Thank you, Leon. Alan.
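The pre-made, toggleable checks Frode describes, run when a document is opened or exported, could be sketched like this. The prompt wordings and the `run_llm` callable are hypothetical illustrations, not Reader’s actual design:

```python
# Pre-made analyses a reading/authoring tool could offer as toggles,
# applied on open (for reading) or on export (for authoring).
ANALYSES = {
    "summary": "Summarize this document in three sentences.",
    "fallacies": "List any clear logical fallacies in this document.",
    "contradictions": "List any internal contradictions in this document.",
}

def analyze(text: str, run_llm, selected=("summary",)) -> dict:
    """Run each selected pre-made prompt over the document text.
    `run_llm` is any callable mapping a prompt string to a completion."""
    return {name: run_llm(ANALYSES[name] + "\n\n" + text)
            for name in selected if name in ANALYSES}
```

The user pre-selects which checks apply, so opening a document can show a small bundle of analyses rather than one generic summary.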

Alan Laidlaw: Yes. So, yeah, I like that point as well, because it focuses less on the AI and its capabilities today, which is a very open horizon, right? There may be hard limits with hallucination, or that may be solved tomorrow. Instead it focuses on what we can control, like how to create better prompts. One of the things I was thinking about yesterday, after reading about Azure’s private enterprise offering, which is pretty exciting for enterprise documentation, was realizing that it will most likely still fail. It could do its job perfectly well, right? Suck in all the documents from an enterprise and have intelligent answers, and still fail, because there are impediments built into the enterprise, like permission layers, or the volatility of some kinds of knowledge over others, that it will be blind to. It could do its job perfectly and still not do a sufficient job. So what needs to be looked at is: what are the opportunities in the interface, in the virtuous cycle between man and machine? What can be built in, like in that toy example of an enterprise LLM?

Alan Laidlaw: Because it’s inherently, sort of discontinuously, collaborative with other people in the enterprise, you know? Would that be a good place for: you ask a question, you get an answer, and it’s sourced somewhere. But if you go to the source, you could see that, say, Mark asked a totally unrelated question, but it touched on this bit of found material. Would you like to see what Mark was asking? Right. And, to put a pin on it: the interesting problem with prompting is that there are so many problems that we don’t know how to frame yet. We don’t know how to phrase them; we don’t know what we’re asking for. And in that light, actually going through a corporate Slack, the historical knowledge there, could be more useful than talking to a GPT version, because you could see people talking about things that you don’t even know how to frame the question about. And then it reminds you. In the current paradigm there’s not really an interface for that sort of synoptic view or discovery. I think there are a lot of interesting opportunities with that, which gets on to another topic, but I’ll hold that for later.
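Alan’s permission-layer worry suggests one concrete mitigation: filter retrieved documents by the asker’s permissions before any text reaches the model’s prompt. A minimal sketch with hypothetical names; real enterprise offerings typically do this with access-control lists on the retrieval index, not the toy keyword match used here:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed: set  # identifiers of users permitted to read this document

def permitted_context(query: str, user: str, docs: list) -> list:
    """Naive keyword retrieval with the permission check applied
    *before* any text can reach the model's prompt, so an enterprise
    assistant never summarizes documents the asker cannot read."""
    hits = [d for d in docs if query.lower() in d.text.lower()]
    return [d.text for d in hits if user in d.allowed]
```

Filtering at retrieval time, rather than redacting the model’s answer afterward, means restricted content can never leak into a generated response in the first place.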

Frode Hegland: Brandel.

Brandel Zachernuk: So, I think another issue that I have with LLMs at this point, similar to what Alan was saying, is that the feedback loops, the cycles of work and how it’s done, are just vastly too large. In the context of work with new, untrusted people, I have tended to give them very small jobs and have the ability to supervise their work on a moment-to-moment basis and give them momentary corrections, as I come to understand what it is they seem to be trying to do, in order to elevate and improve their working process. And that’s when I’m asking other people to do work; when other people are working with me, I’m asking for even smaller chunks of work. The problem with generative AI of all forms, and, well, I’ve actually gotten back into drawing and painting, I made a Minecraft character for my daughter, which was very fun, is that when I draw or paint, I don’t want something to make a picture for me. I barely want to cede control: to take the pen, to be able to handle a particular contour or fill in a shape, that is the absolute joy of creation. Not everybody is creating at a level of volume that means they’re permitted to take joy in every single stroke, but I get a lot out of those things. And if somebody were to just make another picture like mine, that’s at total cross-purposes with what I’m doing. I don’t think it’s an intractable problem for AI, in the sense that it would be possible for people to build generative or assistive tools where the learning models go right down into the details of every single pen stroke, every character, every rewording and all of those things.

Brandel Zachernuk: But until I have tools that are trained on the process, like you say, the process versus the result, I think I basically haven’t seen anything that an AI can do for me that I’m personally interested in. They have just these massive turns where they’ll go away and write an entire paragraph about something, and I don’t have those moment-to-moment abilities to apply course correction. And I also don’t really have the ability to think about how I can earnestly incorporate the output of that into a technical workflow. In the sense that, you know, I was writing a video game with my daughter, and what I do there is I say: okay, so now the ball is dropping; now let’s make it so that the ball drops faster every moment by creating momentum, or acceleration; and now let’s put a shadow under it, and let’s make a little bounce sound. And for all of those bits and pieces, I don’t see how I can write a program and have help like that, or write a program where the AI helps inside the program like that. So yeah, those are not impossible problems, but they are manifestly problems that are not being solved by the current crop of tools.

Frode Hegland: Right. But Brandel, you of all people, you use a lot of AI every day. Right? Don’t you use Siri, don’t you use...

Brandel Zachernuk: Speech recognition? I hate it. It’s garbage.

Frode Hegland: Have you ever taken a picture with your iPhone?

Brandel Zachernuk: Yeah, I don’t love the corrections that it applies there either. I used to have, like, an SLR, and I liked the output better.

Frode Hegland: Oh, I completely agree with you. I’m exactly the same. But there’s a couple of things. First of all, you’re an extremely bright person; you do things differently. I’m just saying that the students are going to be using AI no matter what we think, right? They’re going to use it, and they’re going to use it either to cheat or to augment their brains, or both. It’s just going to happen. I don’t think we have any say in that at all. All I’m saying is that, of the available stuff, some of it we should just tell them to run away from; for other stuff we should say, here’s a better way of doing it, and build better tooling. One thing I’ll show you here briefly: this is the Library in Reader right now, and this is completely fake in terms of layout, but this section, this paragraph, is a summary of one of the documents (not this one, because I’m playing with different ways). It’s really pretty brilliant, right? And if you then go and open these documents...

Frode Hegland: You know, it’s readable, but there’s a lot of stuff, unless you really know you want to read it. Why not use this kind of summary to help you determine that? Right. It’s probably useful to help you get to that stage, but what about the further stages of getting into it here?

Mark Anderson: Well, it is interesting, because, I mean, it’s interesting that Peter posted, obviously, a response back from a prompt, and by paragraph three it had made up two papers. And that shows, in a sense, just how powerful the system is, even when it’s getting it wrong. I don’t see that error as being sort of evil as such; it’s doing what it’s designed to do. It does show that perhaps the people designing it aren’t applying as much thought as they should to all parts of it. I guess the sexy part is the algorithmic code at the heart, and the rest of it’s a bit of an add-on; that’s often the way software rolls. But it’s actually very pertinent, because I think this summary thing is quite interesting. And what I hear reflected back to me is very much that things either speak to somebody or they don’t. It’s actually quite personal, and it’s not a right or wrong; I don’t see it as a binary thing. And it’s actually quite contextual. If it’s choosing the best taxi rank or the best sushi house to go to, probably no big deal. Something perhaps closer to which surgeon might do surgery, say on your back or your heart, I probably might not be so ready to just take a supposed solution. And that’s a problematic thing at the moment; it’s a very immature thing. And the struggle for a society is that it’s also very beguiling. It gives you an answer.

Frode Hegland: Mark, I’m sorry for interrupting you, but we know it gives rubbish answers. That is one of the knowns. I don’t think AI is good for that, unless you’re asking a basic physics or nature question that you can then check; it does that. I’m not advocating it. Okay.

Mark Anderson: So here’s the thing, right? Take hold of that statement and apply it to the claim that something does good summaries. Those two things cannot both be true.

Frode Hegland: Yes, of course they can.

Mark Anderson: Well, not if it’s making stuff up.

Frode Hegland: No, but more just... Okay, have you tried it? Okay, tell you what, let’s try it together. Because what I’m going to do is open up a Mark Anderson paper and do some...

Mark Anderson: You can take my current paper and summarize it. I mean, you know, the thing is, and it’s really difficult, the trouble is I’m painted into a corner. I’m not being absolutist about it, but I have a genuine interest. I mean, I want these things to work; I’d like them to get better. But part of making them better is to understand the way in which they fail. It’s the way we train as humans. We learn by the errors we make, because often we don’t understand why we’re making them, and our reflection on the errors we make teaches us ways to avoid them, either because we better understand the subject or, at the most basic rubric level, we know things to avoid doing. And I’m sorry if I used the words ‘make stuff up’, because I’m not trying to put a moral edge on it. Right. Okay. But at the end of the day, it comes down to trust.

Frode Hegland: You can read the text, right?

Mark Anderson: Yeah, it’s just. Hold on a second. I just need to move a message out of the way.

Mark Anderson: No. That doesn’t really seem right, if I’m honest, as I wrote the paper.

Frode Hegland: It discusses the evolution and usage of hypertext systems.

Mark Anderson: Yes. Well, it doesn’t criticize the proliferation of commercial digital formats. I mean, on an immediate linear reading of that sentence, no. I would tell the person that wrote this summary to go and read the paper.

Mark Anderson: And again, the difficult thing is we seem to be being sucked into a sort of zero-sum argument, and that’s not really what I’m interested in here. I am interested in how we move to a point where we have greater trust. I mean, I think this stuff is really impressive, super cool; I’d just like it to be more accurate. And part of that probably is to have some sense of provenance built into it, for it to acquire the ability to understand how it derives provenance, rather than rolling the magic eight ball on co-occurrence of words.

Frode Hegland: Okay, so here I’ve asked it to do something else. I asked about the key points. The provenance is your paper. But number two is pretty good.

Mark Anderson: Yeah, but I mean, I think even a child would get that out of the paper, so I’m not that impressed, being honest.

Frode Hegland: Mark, that is the whole point. Even a child would get that out of the paper, if it’s read the paper. Right? So you can’t go from one side saying it’s wrong to saying it’s correct.

Mark Anderson: Well, no. The title is called ‘Seven Hypertexts’. I mean, I’m just saying there’s nothing particularly profound in it finding that. So I’m, yeah, I mean, it’s good that it didn’t get it wrong.

Frode Hegland: Hang on, hang on a second. The only place where you say what the seven hypertexts are in a list is really here, right? And it’s written in plain language. It’s not, you know...

Mark Anderson: It’s written in several places in there.

Mark Anderson: Yes. But, you know, forgive me, I wrote the paper. I mean, I didn’t, you know...

Frode Hegland: Yeah, hang on a second. But, okay, so you think that this one is wrong?

Mark Anderson: It’s not... No, don’t use the word wrong; this isn’t a zero-sum argument. What I’m saying is, the first one I didn’t find to be so useful or convincing. The second one I didn’t finish reading; it certainly seemed more useful.

Mark Anderson: Was it a good precis of it? Well, I’m unsure. But then again, these papers are broadly written in the first instance for people who have some understanding of the field. I mean, you know, you wouldn’t take a paper like that and just give it to the person on the top of the Clapham omnibus and expect them to necessarily understand or engage with it. You might write a sort of abstracted version...

Frode Hegland: Okay, the discussion is going into really weird areas, because what I was hoping we would do is to say: AI is a thing, how can we use it right? What’s good and what’s not good, whether it suddenly is too easily readable or it isn’t. I’d just like to get a little bit beyond that discussion. Anyway, Leon, please.

Leon Van Kammen: Yeah, when I was looking at this summary, I was thinking it would be funny if it would basically already predict what was about to happen, that it would say ‘the author does not agree with this summary’. And then it would be great if there would be some kind of validation going on with the author: for example, later the summary could get a sort of checkbox saying this is agreed by the author. I don’t know how that would work, how Mark would get this summary and validate it, but it would be interesting if you had more validation on whatever is being summarized, for an article which has been written by a person whose metadata is already in the document, maybe with some contact info, some kind of validation channel in the document. That was just a random thought.

Frode Hegland: No, it’s not random at all. This is something that, believe it or not, Mark and I actually talked about last week. Here, it says ‘AI-generated’. But if the user goes in and edits it, it no longer says ‘AI-generated’. So to have what you’re saying, a validation, or ‘no longer just AI but AI-assisted’: these things are a really important part of the discussion, for sure.
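
[Editor’s note: the provenance labelling discussed here (AI-generated, demoted to AI-assisted on human edit, plus Leon’s separate author sign-off) can be sketched as a small state object. The class and field names below are illustrative, not any real standard or shipping implementation.]

```python
from dataclasses import dataclass

@dataclass
class SummaryRecord:
    """A summary starts life labelled 'AI-generated'. Any human edit
    demotes the label to 'AI-assisted'; author validation is tracked
    separately, e.g. reached via contact metadata in the document
    (hypothetical mechanism)."""
    text: str
    provenance: str = "AI-generated"
    author_validated: bool = False

    def human_edit(self, new_text: str) -> None:
        # Editing the text changes the provenance label, as Frode describes.
        self.text = new_text
        self.provenance = "AI-assisted"

    def author_validate(self) -> None:
        # Leon's suggestion: an explicit author sign-off on the summary.
        self.author_validated = True

s = SummaryRecord("Surveys seven kinds of hypertext system.")
s.human_edit("Surveys seven distinct traditions of hypertext system.")
print(s.provenance)  # AI-assisted
```
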

Frode Hegland: Yes, Mark.

Mark Anderson: Out of interest, because I’m listening at this point: my co-author Dave actually did some summarization, using, I guess, ChatGPT or something similar, while we were authoring. That’s why I made the point in the chat at the side about human-in-the-loop. I think it’s useful there, because there are two really useful things it can do. It can do a first pass of: does what you wrote come across correctly?

Mark Anderson: And that can be useful. In other words: are you actually writing about what you thought you were? And in a perhaps more directed sense, it can be useful for helping us with the fact that, increasingly, some of the primary readers of what we write will not be human. So another take, a slightly different one, is: what does a computer, what does an AI, read into it? And that’s actually quite important, because if there’s a disconnect there, then that’s the beginning of your garbage-in-garbage-out cycle, because these things feed on themselves.

Speaker5: Yeah.

Mark Anderson: Not intentionally, but in essence. So being able to see that is a really useful thing. And I think one of the things that may come out of it is that we may find, not in every context in which we write, but in some contexts, that we need to write in a slightly different way (or some of us may need to and some may not) in order to make sure that the non-human actors who will interact with, effectively, the text of what we write will actually get the correct understanding from it.

Frode Hegland: Yes, super important. So you will remember that we have this edition of The Future of Text Volume 3 that we’re experimenting with, where on the left we have a summary and on the right we have the first page of the article, and of course you can click to open the original. I think you’ve all seen that. So that’s cool, and it certainly helps. But these books that we’re producing together are just absolutely massive. So I do think summaries are cool, key points are cool, but I think it is really just the beginning. What I want to see is more of Brandel’s magic carpets. Right? If we can figure out ways to run prompts over either a set of articles in one of our books, or over a lot of research articles, in ways which are useful to the reader, and then present the results as more of a landscape. Because one thing Mark discovered, as he’s told me, is that PDF articles from the ACM, for instance, have a lot of nonce keywords, meaning keywords only used in that one article. Other keywords tend to be more general, so generating useful keywords is hard, and it may or may not work with this. So the whole thing: we want to be able to work in a huge VR kind of space somehow, right? At least part of the time. What the hell do we show? Do we show the keywords up there? Do we show articles? Do we show authors? Do we show connections? These are really difficult issues.

Frode Hegland: And can we use colors and shapes to build this? I mean, I want to have a system whereby... First of all, one thing that’s actually a bit difficult with the ACM is downloading all the PDFs from a conference; it tends to be one at a time, which is a bit loopy. I’m sure we could do a script. But the point is, imagine if I can engineer with an AI to do the analysis that we’re discussing, but have it, every time there’s a new paper, go back and do all of them again and see if there’s a contradiction or an addition. You know, this is when it becomes interesting. If there is a change, how does it tell me? Just by boring old text? I mean, I’m the future-of-text guy, but I think text can be extremely limited for things like this. You know, remember fax paper? It gets yellow when it gets old. Imagine if we have a pile of information where something is burning hot because there are other papers saying this is rubbish. Now, these are amazing things, but they’re only the tip of the iceberg for how we can think with it. Sorry. Over to you, Mark.

Mark Anderson: Oh, you know, I was just thinking on this thing of keywords. So there are two ways this can come in. One is doing what I suppose you’d call term extraction, and that can be useful. But again, I think it wants to be done in the way that Ben Shneiderman talks about a lot: it wants to be done human-in-the-loop. The difficulty at the moment, and maybe it’s just cultural or just human laziness or something, is that we get so seduced by getting any sort of an answer, especially one that looks so un-artificial, that we rush to it. We step past all the other things that we’d normally do in terms of reflection upon it, whether it’s stuff we’re making ourselves or what we take from others. And yet I think these tools can be tremendously useful. I think term extraction is an interesting one. I mean, the project I’m involved in at the moment starts out from the wonderful idea of saying: can’t we get more reuse from the data of our research? In a sense it’s a public good that’s been paid for; why aren’t we getting more out of it? But it’s really interesting. We sit on the side of the conversation.

Mark Anderson: Everyone’s basically saying, why can’t I ask Siri to show me all the cool stuff? And yet at the same time everyone assumes Siri can work out what the cool stuff is, because we’re too busy to actually describe the work that we do. So you have a sort of closed-loop problem. And certainly using some of the AI when authoring, as a sort of co-pilot, I think is remarkably useful. But just as you need to be trained to learn to fly an aircraft (you may learn to do it on Microsoft Flight Simulator or whatever, and I don’t have a problem with that), you do need to be trained. I don’t think many people can just get in an aircraft and, you know, get up and down without any bad things happening. And I think one of the things we’re perhaps overlooking is that because it’s so easy to get something that looks like an answer, we don’t stop to think: okay, how would I best use this? How would I build this? How would I make this really help me? And in fairness, I know that you’re doing that, and the conversations we’ve had about the way you’re putting it into Reader and Author I think are considered and useful.

Frode Hegland: So you use the term terms, which is really interesting. So if you guys have a look at the screen now, what do you think of this? Does this cover the important terms in your paper, Mark, do you think?

Mark Anderson: Well, what it’s basically done is taken some of the chat... no, really, it’s taken the chapter headings and regurgitated them. And that’s the simplest form of keywording; it’s a bit like doing it on word frequency, which is not difficult. Where I find it more interesting is when this sort of system actually comes back with a term that you didn’t explicitly state, using another word, and that can happen. That’s what we want to shade towards, because one of the problems with keywords, why they’re not useful in many articles at the moment, is that people do them at 11:59, just as they’re submitting a paper (I’m making a point). They think, oh well, this roughly describes what I’m doing. Actually, what we should be asking when we’re creating things like keywords and tags is: what term would somebody use to find the thing that I’m writing about? And in many cases it’s almost a certainty that it won’t be the term I’d use, or it won’t be phrased the way I’d phrase it. And that’s one of the areas where I really do see that these systems might help. But it actually requires some testing, because at the moment you can keep typing things into the magic eight ball and you will get an answer out, because it’s almost a fundamental requirement that the magic eight ball will give you an answer.
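
[Editor’s note: the ‘simplest form of keywording’ Mark describes, ranking by raw word frequency, can be made concrete in a few lines. The stopword list and sample text below are illustrative; note the sketch can only ever return terms the author already used, which is exactly Mark’s objection.]

```python
import re
from collections import Counter

# A deliberately tiny stopword list; real systems use larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "but", "of", "to", "in",
             "is", "it", "that", "this", "for", "on", "with", "as"}

def frequency_keywords(text: str, top_n: int = 5) -> list[str]:
    """Rank words by raw frequency, skipping stopwords and very short
    tokens. Ties are broken by order of first appearance."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

sample = ("Hypertext systems link documents. Early hypertext systems "
          "treated links as first-class objects, but later systems "
          "embedded links inside documents.")
print(frequency_keywords(sample, 3))  # ['systems', 'hypertext', 'documents']
```
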

Frode Hegland: Mark, I completely understand the argument that these things are not flawless; I completely accept that. I’m just wondering, in conversation, what is more and what is less useful. So, I mean, ‘existentialism’ is not a heading in the document. But I take your point. If we just ask it for keywords, that may be the kind of thing that gave us 22. But if we do that to a whole bunch of documents to see which ones have these in common, it might be interesting.

Mark Anderson: The interesting thing, for instance, is that this is also a problem because our search systems are very immature. It’s actually really difficult: the sort of keywords that it pulled out there are perfectly valid, but those terms alone probably wouldn’t help you find the thing that you wanted in relation to that paper, because they’re effectively subjects. So what Google will give you back is the subject. A thing that would be really useful, actually, one of the most useful things the AI system could do, is give you the actual reference, because a lot of these things are backed by a reference that, at least in a peer-reviewed sense, people have said is broadly considered to be the source of this or that, or the general statement that describes it. And that’s something these systems seem to be bad at, and I don’t quite understand why, because they have all that information in their corpus.

Frode Hegland: Yeah, but if we... sorry, I stopped sharing. But if we do something like this, right, just technically. So we have a look at this; let’s say this is in the system. If I now see ‘PQM’, okay, I want to know where that is in the document. I can do a search for PQM, and it does show up where it is in the document.

Frode Hegland: You know, some of the things in our keywords tend to be single or double words, but the ones that are longer, of course, are something else.

Mark Anderson: Yeah, I mean, for the simplest level of learning, yes. But is that really how you learn on a more complex subject? You actually read the whole thing through, because the terms themselves are only handholds within it; it’s the narrative that matters.

Frode Hegland: But this is what I’m hoping we can discuss, right? Some of these things work. What aspect of learning, what aspect of doing, are we addressing?

Brandel Zachernuk: Yeah. Did you watch Craig Federighi talking about this at WWDC with John Gruber? Really, really great.

Speaker5: Yeah.

Brandel Zachernuk: He talks about how at WWDC, Apple made the very conscious decision never to use the term AI, and talked about transformer models in the contexts where they were transformer models, and about machine learning where machine learning was understood to be the pertinent term. I also don’t think that rejecting the premise of AI per se necessarily means rejecting all of the things that AI, as many people characterize it, does. Like, you know, you said I use AI all the time. And actually, photos is a reasonable example, in that I like taking better photos, but I’m definitely the beneficiary of lazier photos that are good enough for every single situation, without having to do white balance and all that kind of stuff. Another massive place where AI comes in is maps; it’s not just A* or any of these other algorithms. And it’s not just because of the cool merch (I do have a baseball cap I got recently from Apple), but I actually wholeheartedly agree with the approach that it takes, where AI is able to separate your cute puppy’s face from the background to create a sticker in Messages. That’s not simply relegating it to an unimportant task; it’s being able to draw a really clear boundary around what it’s doing, why it’s doing it, and how it integrates with an overall workflow. And I’m a lot more bullish on that as a general approach towards things. You’re saying we need more magic carpets? And I was like, gosh, yes, it has been a while.

Brandel Zachernuk: How would I do another one, and what would I get AI to do for me with it? And, you know, one of the first things, since you’re talking about keyword extraction: if there were any PDFs that didn’t have good-quality ASCII or Unicode text backing them, I would need to take them through optical character recognition, and many people would consider that to be AI. If they weren’t correctly formatted, or there were line breaks in a weird place, then I might attempt to use AI of a kind to fix that. And I might even trust it with assembling some kind of term frequency-inverse document frequency thing, or making an assessment of the overall validity of that TF-IDF algorithm over a corpus of documents, in order to be able to say: well, it did these ones really well, but once I get back to about, you know, 2005 to 1992, these PDFs are iffy enough that you probably want to take a look at them, because the results don’t look good. And I guess I would be comfortable calling a lot of those tasks AI. But one, I don’t know how to do those things with AI, and two, actually, I don’t know that enough of the people who use AI, and who are sort of boosters for AI, are using it in a way that gives me enough of a signal about how to approach it. And given that cliff of utility and information about it, it’s almost certainly going to be easier for me to start on my own and build it myself.
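
[Editor’s note: the TF-IDF measure Brandel mentions is simple enough to hand-roll. A minimal sketch over a toy corpus follows; production systems add smoothing and normalization, and Brandel’s further idea, scoring whole documents to flag badly-OCR’d PDFs, would be built on top of output like this. The sample documents are illustrative.]

```python
import math
import re
from collections import Counter

def tfidf(docs: list[str]) -> list[dict[str, float]]:
    """Term frequency per document, weighted by log inverse document
    frequency. A term used in every document scores zero; a term
    unique to one document scores highest there."""
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    n = len(docs)
    df = Counter()  # document frequency: how many docs contain each term
    for tokens in tokenized:
        df.update(set(tokens))
    result = []
    for tokens in tokenized:
        tf = Counter(tokens)
        result.append({t: (c / len(tokens)) * math.log(n / df[t])
                       for t, c in tf.items()})
    return result

docs = ["hypertext links and nodes",
        "nodes and graphs",
        "graphs of links"]
scores = tfidf(docs)
# 'hypertext' appears only in the first document, so it scores highest there.
print(max(scores[0], key=scores[0].get))  # hypertext
```
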

Speaker5: Yeah.

Frode Hegland: Leon, please.

Leon Van Kammen: Yeah, talking about building it yourself: I was also realizing that we were just looking at PDF Pals, and we were basically looking at a PDF document which, through this application, triggered server clusters with all these language models, which is sort of bizarre. And taking a step back, maybe it also shows a limitation of the author. I think most authors don’t have the time to add to the paper all the questions, or answers to questions, which students might have. Usually it’s a summary and keywords, a very dense summary, and these are not in those papers; when a big document is finished, they will not add all kinds of sizes of summaries to it. But I was just thinking, in an ideal world, if PDF Pals is sort of a funny concept, it might indicate that documents in the future should have much more included. So I could even imagine that at some point, while everybody’s playing with these things, and students too, there is maybe a top five of use cases which they always ask for: a summary, the keywords, and maybe some other things. And maybe AI can be used to embed those into the documents, so that later you don’t need a special application to trigger these language models to do it on the fly. I think now everybody assumes we will use these language models to summarize things for us, while at some point that will be a very cumbersome, boring, clunky way of getting summaries, because maybe we just need better PDFs with all these things inside; basically what Frode is doing with the editor, showing and hiding things based on popular demand. So that’s just an idea.

Frode Hegland: Putting more stuff in PDFs, and in documents in general, as a kind of ground truth, at least ground truth as presented by the source (not that there is an objective truth): yes, absolutely, I’m for that. But the thing is, you know, it’s an imperfect tool.
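
[Editor’s note: Leon’s idea of baking the popular outputs into the document itself can be sketched as a plain-text, BibTeX-like appendix, in the spirit of Frode’s Visual-Meta approach of carrying metadata inside the document. The ‘@summary’ wrapper and field names below are illustrative, not the actual Visual-Meta specification.]

```python
def embedded_summary_block(title: str, author: str,
                           summary: str, keywords: list[str]) -> str:
    """Render a summary and keyword list as human-readable plain text
    that can be appended to a document, so any later reader can show
    them without re-running a language model."""
    return "\n".join([
        "@summary{",
        f"  title = {{{title}}},",
        f"  author = {{{author}}},",
        f"  keywords = {{{', '.join(keywords)}}},",
        f"  abstract = {{{summary}}},",
        "}",
    ])

block = embedded_summary_block(
    "Seven Hypertexts", "Mark Anderson",
    "Surveys seven traditions of hypertext system.",
    ["hypertext", "linking", "annotation"])
print(block.splitlines()[0])  # @summary{
```

Because the block is ordinary text, it survives copy-paste and printing, which is the point: the metadata travels with the document rather than living on a server.
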

Frode Hegland: Apple Notes improved their PDF capabilities. Needs Reader? Yes. Reader, yes. It would be fun if we could start just experimenting with what we need, but it’s really hard to answer the question: why? Right. The big issue I have is in the Library. I see your hand; this will be real quick. In the Library here, you know, what’s the point of it? So currently all it does is hold all the highlighted text: anything you highlight you can search for in the Library, which is itself useful. But if we go further and do the thing I just showed you before, and include this, why? Is it to stop you reading? Is it to help you decide what to read? Is it to help you see connections? I would really like to have more discussions on that. Mark. Oh, no, no, you’re muted. Yeah.

Mark Anderson: So your question set me thinking, but the point I was going to mention is that I certainly think, and I’ve felt for some while now, that certainly in the more empirical, scientific end of our communications, we need to break out of the straitjacket of our old style of publishing papers. The paper should just be a report that is built from the data underneath. In other words, if you don’t have the data underneath, you can’t write the paper, whereas at the moment it’s the other way round: you will ask about something and people will tell you, well, it’s in the paper, when it isn’t; it’s in the mind of the person that wrote the paper, not in the paper itself. Building off data, I think, would be much harder to do in some of the humanities, where things are much more conjectural and argumentative, not in the fighty sense, but in the making of an intellectual argument. And I’m not so sure it fits so well there.

Brandel Zachernuk: Yeah, I mean, think about the jump in thinking that came with, you know, Playfair and Boyle, around things like the natural sciences and economics; people building, like Edward Tufte’s whole shtick, the original data visualizations and things like that. I think those things are amenable to explicitly quantifiable dimensions in a way that has largely escaped people within software, or, yeah, people who don’t have the same familiarity with measurability and quantification. And, you know, all models are wrong, some are useful. I think it’s just that computational tools are not immediately within reach of the people who tend to do that kind of social-philosophical work in the academic sphere, rather than it being intrinsically more difficult to capture modes that contain some value. I mean, maybe that is the same thing, just that the people who are doing it don’t think that way and it’s harder. But yeah, I think there’s a lot that can be done once people have the ability to compose models like these, and to have the multi-modality of the carpet and other things like that give them a way of easily transposing those things.

Speaker5: Yes.

Frode Hegland: Yeah. Yeah. And I’m so glad I see Alan’s hand coming.

Speaker5: Up because Brandel, that’s.

Really the point. There are so many different users use cases and levels to this and just walking this morning. Sorry Alan, I’ll be real quick. I was talking about how to teach maths to Edgar. I don’t know that much maths, but there are certain things like fractions and percentages. I just want them to visualize in his head as the key things because the next thing, next thing with maths is maths is a creative medium, you know, what do higher level mathematicians do? They invent stuff. They don’t just follow brute things. And that is something that Alan has talked about a lot, you know, the intuitions and getting to that level. So I think what would be really, really good in our community, let’s say that maybe next Monday we have, Oh yeah, I’ll be traveling, but I should be okay anyway. Maybe what we try to do is have some discussions on the average student learning something, you know, the average person going through some fine. That’s the thing. But what you guys are talking about or what I hope Alan will talk about more now, the whole thing of I don’t know what I’m looking for.

Frode Hegland: I am here to learn, how can I build things to make that happen? I think that would be incredible. Alan, sorry for talking so long.

Alan Laidlaw: No problem. Um, this touches on that. Who's familiar with Hilbert's 23 problems?

Speaker5: Okay.

Frode Hegland: Come here and embarrass us. Sure, why not? Only Brandel, I'm guessing.

Alan Laidlaw: Peter. Not a surprise. Um.

Frode Hegland: Of course. Peter. Yes.

Alan Laidlaw: In 1900, David Hilbert, the one person on the planet that von Neumann said was a genius, presented these 23 math problems that were very difficult and not solvable in the traditional sense. It was more about: are these problems provable or disprovable? And if so, prove that. Many of them still haven't been answered. The ones that have been answered wound up launching whole new kinds of math. Gödel's incompleteness theorems came from that; Poincaré's torus-shaped-universe stuff came from that. Anyway, I bring that up as a spiritual guide to what I'm about to bring up. I've been thinking along the lines of 23 questions for our text, our digital-based life. Right? Can 23 questions be formed, and improved over time, that are useful, that actually address specific problems, and that may be solvable or may not be? I'll give you a few random examples, and obviously some of the things we've talked about in this call so far would fall right into them. A random example might be: I want to be able to write on my computer and have the lines flow much the way I write in a journal, where I don't have to follow the lines that are prescribed, where the words can twist around however I want. Right? That's a problem that I experience.

Alan Laidlaw: I want to be able to have spatial control over my digital text. And maybe a caveat would be: but I don't want it to be in a canvas. Okay, so then what's the problem with canvas? Okay, let's define that and try to flesh it out, and from that process come up with a problem that is sufficiently complete that it says: this is the interaction I want to have, this is the freedom I want to have. And like a lot of the problems that Hilbert brought up, they weren't applied math problems. They weren't practical. But they went on being useful later on. Primarily they were useful for saying: hey, here's a different way to think, right? So, another one, there are so many of those. Like when you just used the word creative in reference to math, that was really interesting, because I was in a different space where I would think that creating art would be creative and math would be learning formalisms, right? So here we have this problem all of a sudden of a word whose meaning is actually dependent on your view of the world at that moment versus my view of the world at that moment. Right? So a problem might be, and this again would be really difficult, this is the hard part, to figure out a way to explain this idea:

Alan Laidlaw: Is it possible to create words that can maintain the context of the author while they're using that word, such that that context is transmissible to another person reading it? And then, you know, one answer for that is: yeah, just read the freaking article, or the context around whatever a person said; put more effort into describing what you mean. But is there another way to get at that? And if you can't, does that imply limitations on what computers can fundamentally do around search and recall? Right? Like, if there are limitations to understanding one's internal nature that require a certain amount of effort by another party, does that mean this is an inherent limitation of this meta-medium that we're dealing with, and that it's not worth a whole lot of effort to try and solve? Anyway. So yeah, I bring that up as a prompt I'd love to get ideas on, or talk about more. Like, what would the 23 questions be? Right? They could be as simple as: I want to be able to highlight the same text multiple times. Which seems trivial; it probably doesn't rank as a 23-problems kind of problem, but maybe there's something underneath it that is suggestive, that could be brought to the surface. Right.

Speaker5: Anyway.

Frode Hegland: Can you name 1 or 2 of those 23 problems, please?

Alan Laidlaw: Hilbert’s or mine?

Speaker5: Hilbert's. The original Hilbert's.

Alan Laidlaw: Yeah. Let's see. The Continuum Hypothesis, that is about the nature of infinities: are all infinities the same size, or are there infinities of different sizes? Right. The really famous one resulted in Gödel's incompleteness theorem, which had to do with, essentially...

Speaker5: Okay.

Alan Laidlaw: Is mathematics logically complete? Is it logically consistent? And he proved that it cannot be shown to be both complete and consistent. So you can still do a whole lot with math, but at some point it relies on a kind of human-level agreement about what these symbols mean, right? That was a pretty famous one. Let's see. This one: the finiteness of certain systems of functions. The motivation for Hilbert's 14th problem came from previous work he'd done involving algebraic structures called rings.

Frode Hegland: I mean, this is a great provocation: to try to do the same for text, in adequate numbers. Can we do it as a community, for text?

Alan Laidlaw: I would love to see that. I think it would be interesting because I feel like it would cast a light on how reliant we are on the engineer's mindset, on formal systems, right? Obviously computers are the result of formal logic, and they are great when they're used for formal-logic problems. But the way that we use them today, as life enhancers, as social tissue, doesn't necessarily fall within the domain of logic. And so it would be great to present enough bits of evidence to say: hey, maybe it's time to rethink how we approach the substrate, the unspoken parts of software.

Frode Hegland: I think this is very, very important. One of the things I did at The Future of Text in California a few years ago was break into groups, and one of the groups was tasked with just listing different kinds of links or connections. You know, these were really clever Silicon Valley people, and they had no idea. That was scary, right? So can we do that now? Can we talk about one aspect being the connectedness of text: how many types of connections are possible? Sorry, there you go. How many?

Alan Laidlaw: Yeah, I think what's interesting about the prompt...

Frode Hegland: Emily's had a haircut. Everyone has to have a look and give her applause.

Speaker5: Hello.

Speaker8: Hi.

Speaker5: That's lovely. Hey, Emily.

Speaker9: I have to go. I'm going to...

Frode Hegland: Okay. Sorry. Yeah. Sorry about that.

Alan Laidlaw: The interesting part of the prompt, of framing it as a problem, is that it's not so much "imagine all the different ways." It's more like: here is a problem that we experienced, and it may not even be a problem you realize you experience, right? Here is a problem, and it doesn't have to come with a solution; the point is to bring it to the community and ask people to try and solve it, you know.

Frode Hegland: Yeah, but even just going back to these first principles is really provocative. What you brought up is very good. Um, you know, it's so easy for me to have a lot of coffee, wake up early in the morning and write it out myself. But that'll just be my brain. It'll be much more interesting if we try to come together with very different ways of looking at it. And maybe, for instance, one of us wants to write a subsection of, I'm not even going to suggest it, something or other. You know, what is the future of text? One part of the future of text is exploring the limits of text, and to know the limitations of text we need to understand the attributes of text. And we kind of go in and out of trying to look at that versus practical things. And then we go into social things, and of course social things are completely intertwined with text, because text in and of itself, if it doesn't go into a brain, is not very useful text, right?

Alan Laidlaw: Yeah, I think a lot of the 23-problems kind of thing is also very related to what Ted Nelson did. I think a lot of his work could be described as problems that he noticed. Anyway, sorry. Go ahead, Mark.

Mark Anderson: Now, I was reflecting on what you said, and the more I think about it, something we've done quite unintentionally is that we've basically tried to computerize print, not text: text as constrained by the medium of print, for very obvious reasons, because it was the technology at the time and it seemed the thing to do. And I think a lot of the things we're tripping over now show that an unforced error early on was basically trying to digitize bits of paper. It's not that they're wrong, and heaven forbid that books go away, because I love them. But I think at a deep level one of the problem areas is that we perhaps never stopped to look at to what extent our centuries of experience with print have shaped digital print. Because if you look back, people have written greatly about how the arrival of print changed the way we wrote, and we haven't perhaps looked at that for digital. Things have moved so fast we haven't had the moment for that reflection. Anyway, passing thought.

Frode Hegland: That is something I violently, strongly agree with. I hear it often: print, print, print. Print is of course not the only analog medium, but as Daryl Allen keeps pointing out, there is such a thing as writing on paper. You know, the whole manuscript thing is lost. Absolutely. And we can even go back to cuneiform, not as a joke: that was 3D text, for crying out loud. It had attributes of where you choose to write on this little thing. So yes, we should absolutely not throw away the history. We have been recapitulating the rectangle in digital, yes, but there is more. Sorry, very exciting; I'll shush for a bit. Please go on. Alan first, please.

Alan Laidlaw: This is just a small, quick point, but in line with that. Another thing I've been thinking about, which may feed into this: we as a culture are inspired by science and everything that science has done. And we assume that science is progress, because we can see that we learn more about the world, we have a better explanation of the universe. That's awesome. The problem happens, I think, when we assume that technology is related to science and has the same qualities. So if you have an improvement in technology, it is progress in the same way that coming up with a new scientific theory is progress. And so we don't very often reflect on, or take a therapeutic approach to, what we have. It's always: oh, this new thing is obviously progress, the way evolution is, so let's keep building on it. And there isn't quite a discipline for, well, first off, realizing that technology really has nothing to do with science. I mean, I'm happy to be misinformed. And then thinking of it differently, thinking of it as perhaps even closer to ethics, you know, as a thing that's cyclical, or, as Alan Kay says, fashion. Anyway.

Speaker5: That’s it.

Frode Hegland: Of course, it has something to do with science. It’s built on science. But yeah, a really important point. Thank you, Brandel.

Brandel Zachernuk: Um, I can't remember exactly why I wanted to give this spiel, but I'm pretty sure it's relevant. So, I teach a lot of people programming, and sometimes those people think they already know programming, so I need to disabuse them of that. One of the interesting exercises is to ask: what is the appropriate representation in a program for money? Is it an integer? Is it a floating-point value? Because those are actually different, and if you need fractions of cents, those are different again. But you can also have money represented as a boolean, just yes or no: does somebody have money or not, in the sense of having something like a bank account. And depending on what you need to do with money in your context, you might actually want something like the currency, or the condition of the money if it's for coin collecting and swapping. Or if it's for the float within a cash register, then you might want to know that you have so many 20-cent pieces versus dollar notes or dollar coins. So something that seems objective is still highly subjective: what somebody has in terms of money has a specific context that is deeply relevant and wildly divergent depending on those contexts.

Brandel Zachernuk: And it's in that same vein that there's not an adequate level of nuance for people to understand things like connections. What I tend to do in answer to that question of how two things can be related to each other is to go through a couple of different instances where the representation, and the nuances of that representation, can be wildly divergent depending on the context. And I think it's instructive in the context of building formalized systems around something. (My daughter is waving outside.) It's really clear that even when you have something as hard-edged and measurable as money, you have to work back from what it is you intend to do with it. There's no such thing as a context-free appropriate representation, because nobody cares about all of the surface details on your £20 note, unless you're in the business of swapping those £20 notes, in which case it becomes the most important thing in the world. So, yeah, I don't know. Sorry.
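Brandel's money exercise can be made concrete in code. The following is an illustrative sketch only (the names and contexts are invented here, not taken from the discussion): three different, equally valid representations of "money," each fitted to what you intend to do with it.

```python
from dataclasses import dataclass, field
from typing import Dict

# Context 1: accounting. Integer cents avoid floating-point rounding errors.
price_cents = 1999  # $19.99

# Context 2: eligibility. All that matters is whether any funds exist.
has_funds = price_cents > 0

# Context 3: a cash-register float, where the count per denomination matters.
@dataclass
class RegisterFloat:
    # denomination in cents -> how many of that note/coin are in the till
    counts: Dict[int, int] = field(default_factory=dict)

    def total_cents(self) -> int:
        return sum(denom * n for denom, n in self.counts.items())

till = RegisterFloat({2000: 3, 100: 10, 20: 25})  # 3 x $20, 10 x $1, 25 x 20c
print(till.total_cents())  # 7500
```

The same "objective" quantity, $75.00, is useless to the register model without the denomination breakdown, which is exactly the point about context-free representations.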

Speaker5: That's totally relevant.

Alan Laidlaw: It reminds me, there's a fantastic book, All Data Are Local, that is very much in keeping with that theme. And honestly, it's a Martin Luther-level point, because it's basically saying the way we treat data right now is foolish.

Frode Hegland: This is where it gets phenomenally interesting, because any word has to refer to other words to have any meaning, obviously. Doug had two types of links, as I'm sure at least half of you know: explicit and implicit. A word is implicitly linked to its entry in the dictionary, for instance; we call it search now, or whatever. And then of course there are many types of explicit links. And this is where metadata gets very, very interesting, of course, Brandel. This is something that I want to be able to capture, but I don't want to go down the route of manually adding metadata, because it gets expensive and boring. But I think that in rich environments such as virtual environments, if we choose to package stuff together, you know, this is what I want to have included with this stuff, we can just let that bit happen automatically. For example, in Author, when you export with a heading, it knows a table of contents should be generated; it's free, that kind of stuff. So the money example is a really good one. Sorry for rambling. Leon, please take over.
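Engelbart's distinction as Frode describes it can be sketched minimally. This is a hypothetical illustration (the link table and the search/URL scheme are invented for the sketch): a word resolves through an explicit, author-stored link when one exists, and otherwise falls back to an implicit link, i.e. lookup or search, which costs the author nothing.

```python
# Hypothetical author-stored targets: explicit links.
explicit_links = {
    "visual-meta": "https://example.org/visual-meta",
}

def resolve(word: str):
    """Return (link_type, target) for a word."""
    key = word.lower()
    if key in explicit_links:
        # Explicit link: the author deliberately attached a target.
        return ("explicit", explicit_links[key])
    # Implicit link: every word is implicitly linked to its dictionary
    # entry or a search, with no authoring effort required.
    return ("implicit", f"search:{key}")

print(resolve("Visual-Meta"))  # ('explicit', 'https://example.org/visual-meta')
print(resolve("cuneiform"))    # ('implicit', 'search:cuneiform')
```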

Leon Van Kammen: Yeah, very interesting. I was also thinking, when we look through the lens of AI solving problems for humans, I could also imagine that this is sort of a human problem: that students are going to summarize texts. And to me it almost feels like we are in a certain culture, a certain mindset, almost a tunnel vision where, just like Brandel said, we don't want to be misunderstood. If an economics professor writes a paper about unfair inequality, and a summary then mentions "people who have lots of money," that phrasing would immediately trigger the professor who wrote the article: oh, this is not okay, I don't want to be misunderstood. And I think there is a bit of a difficulty there: text really empowers everybody, everybody can read it, everybody can have an AI summarize it, but the writers are maybe easily triggered.

Alan Laidlaw: Or they are.

Leon Van Kammen: Maybe it's also a status thing. They don't want their texts to be simplified or misrepresented, and I think this adds to the problem: maybe these simple summaries will never be embedded in the papers, because people have a tendency to go for the highest precision in their papers, as if they could be attacked at any moment. So I also see some issues there, that we ourselves are not really ready. Every writer dislikes reading summaries of their own work, because there's a lot to disagree with immediately. So those are just some thoughts I had.

Alan Laidlaw: Yeah, the problem gets really deep, really fast. It's pretty wild. One of the issues it brings up is how to know how much attention to allocate to whatever you're reading, right? There are things that are skimmable, and there are things that seem skimmable because the words are simple, but you actually have to read every single word to get a sense of what they mean, right? And sometimes you don't know that. Sometimes you don't even have the energy to apply your attention that way. So that's a great point. It goes back to some ancient philosophy, right? You never walk in the same river twice. So even if text is committed to some sort of artifact, it's still very much like a river, you know.

Frode Hegland: So this is really exciting. I talked to someone recently about this, and one of the things he talked about was using the Visual-Meta model not just for one thing but, just like Peter has said before, for addendums to addendums. So for instance, at the point of generating the downloaded PDF, include several different types of summaries and all of that stuff in clear categories, so it can be used like we've talked about, but make it clear when it was done. Then the end-user software can choose whether to have this updated periodically or whatever, and when it is, maybe compare: how was this analyzed this year versus ten years later with different technology? So the idea is that metadata, added either carefully from the source or entirely arbitrarily with AI, can become a really interesting resource as long as you know why and how it was added. I mean, one thing that would be really great, and that I want more of, is the ability to use our own datasets for these things. I would really, really love to feed all our years of transcripts into an LLM: this is our background; use the internet if you want, but that's our background. So every time I do a transcript, if there is a noted change of tone, or if the keywords are now used in a different way, tell me. It may get it completely wrong, that's okay, but it's yet another way that we can look at our own stuff, right?
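The provenance idea Frode describes, multiple summaries embedded in a document together with a record of when and how each was generated, could look something like the sketch below. Note this is Python for clarity only; Visual-Meta itself uses a BibTeX-style appendix, and the field names here are assumptions, not part of any spec.

```python
from typing import Dict, List, Optional

summaries: List[Dict[str, str]] = [
    {"kind": "abstract", "text": "Author-written abstract...",
     "generated_by": "human:author", "date": "2023-08-14"},
    {"kind": "keywords", "text": "hypertext; metadata; AI",
     "generated_by": "llm:example-model-v1", "date": "2023-08-14"},
    {"kind": "keywords", "text": "text; provenance; summarization",
     "generated_by": "llm:example-model-v2", "date": "2025-01-02"},
]

def latest(kind: str, entries: List[Dict[str, str]]) -> Optional[Dict[str, str]]:
    """Most recent entry of a kind; ISO dates sort correctly as strings.
    Reading software can use this to decide whether to refresh an old
    analysis or compare it against a newer one made with newer models."""
    matching = [e for e in entries if e["kind"] == kind]
    return max(matching, key=lambda e: e["date"]) if matching else None

newest = latest("keywords", summaries)
print(newest["generated_by"])  # llm:example-model-v2
```

Because every entry carries its own generator and date, nothing is ever overwritten: newer analyses are addendums to addendums, exactly as described above.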

Speaker5: Yeah.

Frode Hegland: I'm talking to you guys. Often I go into Lightroom, because doing image stuff to me is like playing with Lego, you know? It's relaxing, but it's useful and it's powerful. Shouldn't we have the same kind of nice tools to play with these types of textual things? Mark and Brandel, fight; see who's first.

Brandel Zachernuk: First.

Speaker5: Okay.

Mark Anderson: I was just quickly reflecting, listening to everything going by, on the fact that one of the problems with summarization goes back to this question of: what is the reader doing with this? Am I summarizing this so I can come back and know that I read it before? Am I looking for an insightful summary of something I don't know about, or should know about? These are all subtly different, in the sense that if we set out to do a summary deliberately, we might structure it slightly differently in each case. So the fact that we'd have to stop and think about it is useful, before we throw too many brickbats at the poor old software. We're expecting it to cover all those things and get the right answer without us almost necessarily signalling, and I don't mean in the pedestrian sense of the prompts we give it, but really understanding quite what we wanted from it when we asked for the summary. And I don't think there's a simple answer, for the reasons stated: it's often highly contextual to what you're doing. And, Brandel.

Speaker5: Um.

Brandel Zachernuk: This is galaxy-brained and also very half-baked, so bear with me, but I think it's interesting, and presumably people have thought of it before. So, my daughter is ten, which means that she is exploring some really snarky and uncooperative modes of communication, and I stumbled across a video about Grice's maxims of communication and the various implications that communication tends to follow. Sam Altman was recently kind of surprised by how much hidden context tends to underpin person-to-person communication, and I was like: uh, dude, yes. I'm glad you're here now; wish you hadn't done quite so much before getting here, but whatever. So I think one of the big challenges for me with so-called AI tools at this point is that they are misleading in terms of the epistemic modeling we are able to do about the boundaries of their experience, and about what we expect them to do, as a consequence of the way they work and the interface they provide. When you have an electric drill, you don't have to understand everything about brushless motors or direct drive or AC in order to use it, but there's a consistent epistemic model you have of what that drill is and what a drill does. And when you have slightly more elaborate tools like Microsoft Excel or, um, Numbers, is it? I don't use it that frequently.

Brandel Zachernuk: So you have epistemic models, whether you understand them perfectly or not, of what it is going to do, of what you're going to be able to trust. The challenge with ChatGPT, transformer models and things like that is twofold. One is that they're dressed up in the trappings of what a human agent acts like and looks like, and that's further complicated by the fact that a lot of what ChatGPT produces has substantially been written by humans at various times. But the other thing is that it's not a consistent form. If drills hadn't existed in various forms for literally thousands of years before we electrified them, or if you somehow lived in a society where there hadn't been drills and then suddenly drills came to be, then what they can do and very definitely can't do wouldn't be nearly as visible. And so it's really hard to imagine the appropriate epistemic modeling of what you think AI can do, in a context where it looks so much like exactly a person. We're tripped up on both sides of that, with expectations that are much higher or much lower: you can form sentences, oh my God, that looks amazing; or, that's not quite what I would have summarized this as. Because it's not presented, not packaged, in a way that gives us an appropriately boxed context for apprehending the results, for being able to go: oh, this is what this is.

Brandel Zachernuk: It exists in this form, and I can have these expectations and guarantees and guesses about it. A drill that only drills a half circle is most definitely wrong and broken, and a drill whose axis of rotation isn't dead even is most definitely wrong. But what is it that is definitely wrong about an LLM-generated result? How can you understand the characteristics of its operation and working? Part of that is that it's actively trying to dress up like a person, but part of it is also that there is not an adequate continuity and consistency to the interface. And that's, unfortunately, kind of just a function of time, in the same way that a violin is a violin not because violins are good, because they're definitely not, for people's chins or necks, but because they are violins. There's this really, really consistent model for apprehending what a violin is and does and what you need to be able to do with it. To the first question, of whether there is a better representation that an AI can present itself within: I think yes, probably. That's the magic carpet; that's the other sort of construction of appropriately intersubjective representations. One of the reasons why I'm upset about... sorry, I'm really going on, aren't I. One of the things I think it's worth...

Frode Hegland: Please continue.

Brandel Zachernuk: I'll keep going as long as it seems valid. One of the problems with the current crop of AI is that it resists these intersubjective representations, and we haven't done enough introspection or development of tools that have that as a goal. We've merely said: if we promise there's valid stuff on the inside, and we promise that what we see is valid on the outside, then we can completely ignore any aspect of intersubjectivity. And it's not that it's easy; it would be really hard to ask what these hidden weights mean. But I think that's an important component of it, rather than just sticking a stick into the data again and stirring. The second problem is a longer-run one, and largely a result of the past: what are the consistent forms that can give us the proper context for the apprehension of an LLM-based workflow? What is the right shape for AI, such that we can understand what it is and, perhaps more critically, what it isn't? I don't have answers for either of those two things, but I just came to them and I was really excited to share them with you all.

Frode Hegland: Yeah, perfect. Leon, continue.

Alan Laidlaw: Yeah.

Leon Van Kammen: I really like these thoughts, Brandel. It also got me thinking, I don't know if you know this robot ASIMO, I think it's a robot from Japan. Looking through your lens, it almost feels like these LLMs, these ChatGPTs, are sort of looking like a human or talking like a human, and it's like: wow. But in the end, it's perhaps a bit of a temporary thing. Just like this ASIMO robot: it's fun to watch, it gives you this wow factor, like, wow, we're living in the future, but that's not what people ended up using. In recent years we're using very, very specialized robot arms in factories. They're not really human, they are very far from it, and they do a specific task. So I could imagine, for example, Reader being something that already goes beyond this point, where the AI part is basically more hidden. It doesn't act like a person talking to you or typing to you. But yeah, those are the thoughts you triggered in me. That's awesome.

Speaker5: Yeah. Yeah.

Brandel Zachernuk: And it's not to undermine the legitimacy of ASIMO or ChatGPT; ASIMO is an exercise in asking: what is locomotion, and what can we learn about it? You see a lot of similar stuff with people talking about fused deposition modeling, 3D printing: well, all of the things you know about injection-molding plastic are to do with heat shrinkage, so you don't need that anymore; you can build things in these authentically different ways. And it's kind of skeuomorphism, actually, to say we need to recapitulate these structures and formations and processes. We need to use that primarily as a learning exercise: what is this medium, what is this material, what do we need to borrow and what do we need to throw away? Sorry, go ahead.

Speaker5: Yeah, that’s.

Frode Hegland: Yeah.

Speaker5: Okay.

Alan Laidlaw: Thank you for the excellent points. This touches on a lot of things that can only be themes; they're not axioms, not really formulas yet. But Alexander Obernauer, who's working on a kind of rethink of the OS, mentioned an idea that I love and that matches with this: screws, not glues. That is, the value of having parts that you can unscrew and take apart, dismantle, versus the very Apple approach of glue, putting everything together because you get higher gains out of that. I get that. But there's a point at which we have to question optimization, right? There are things that are only possible through incredible precision, specialization and optimization, like making semiconductors, which requires 26 different factories across the world doing very specific things that can't be replicated. But that opens us up to a certain fragility. And there are kinds of problems that often get picked out by the Y Combinator crowd: oh, this is a common problem, let's optimize it, let's find a solution for it. And you find out the solutions don't take off, because the brute-force approach has an invisible attribute of freedom, of movement, of optionality. By doing the manual version each time, you can change up the rules, you know? And so there's an unspoken trade-off of discipline, or procrastination, that is part and parcel of any optimization. Wow, that's a lot of nonsense words. Okay.

Frode Hegland: No, it's not nonsense words. This is really important, because what happens when you optimize too much? You lose robustness. It's simple, right? One of the things that Chris Gutteridge and Mark and I talked about a lot many years ago was how to connect documents. You should have a link, and you should have the citation information, and you should have the file name. In other words, more, if it's important: just do more, right? Over-optimization can lead to serious issues. It's very important, Alan. Just look at our supply chains right now, the physical, real-world ones. But how does that relate to text? It relates in that, in this conversation, we're kind of layering all kinds of stuff to see how it works, and then you come along, take the foundations away and ask: what are the foundational issues of text? Like with mathematics. I think we should probably have a properly announced discussion on that, send out something to people. We also need to have your paper in, of course. Yeah, we should definitely try for that. I mean, one of the issues with this community is we hardly ever write anything down. Can AI help us with that? Now I'm going to the complete opposite side: as a useful thing for us, how can AI help us get stuff down?

Speaker5: One of the funny things.

Alan Laidlaw: Going back to the thought experiment about private enterprise LLMs, the magic carpet that that could be is kind of why I'm interested. Let's say that it happens. Let's say it's perfect; it answers the question perfectly. Well, then you're left with a different problem, which is that everyone's still using the same tools that got everyone into that siloed documentation to begin with. Right? So if everyone's using, say, Google Docs internally as an org, or Quip or Jira, and then they dump all that information into an LLM and it pops out these great answers, well, there's still nothing out there that treats the LLM as a first-class citizen yet, understandably, because, you know, it could just be snake oil. But I guess I'm kind of forgetting why I brought that up. Oh, yeah. Writing. Yeah, it's the tools that we use to write with that are still, some of the time, the thing that I have the problem with. Right? Like, I'm writing about what's wrong with articles, but I have to write it as an article. Anyway.

Frode Hegland: But okay, that’s the thing.

Speaker5: Um.

Frode Hegland: I now have a library and reader. It's not public yet. It should be any minute now; it's been like that for a week. So I'm trying to solve some of these problems for myself. And you probably saw the email I sent around with the idea of how to fold while authoring. You know, this is me capitulating to the fact that this is a pain in my life: when I write a long document, the act of scrolling becomes messy. That's why we've experimented before with folding under headings, but it's always been a bit rough. I don't want people to have an extra line spacing or something weird. So the function that I've developed, and it's not implemented yet, is simply: select an arbitrary amount of text that you're done with, do a keyboard shortcut, and it's gone. It's replaced with its first sentence in hard brackets.

Speaker5: Basically it’s cut.

Frode Hegland: And stored in a safe place. When you click on the hard bracket with the first sentence, it goes back again. So it's kind of like stretchtext. But the point is for us to deal with our own problems, Alan. To write about stuff, when what we're writing about is the fact that writing about it is shit, is of course why I developed Author. And what I'm able to do as an individual is so tiny. But if there is any stuff we can do to help us even experiment, I will try to do it, whether I can code it or whether we can just experiment with it. Because, yeah, yeah, yeah.
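The folding behaviour Frode describes, collapsing a selected span to its first sentence in hard brackets and restoring it on click, can be sketched roughly as follows. This is a minimal illustration; the class and method names are invented and this is not Author's actual implementation:

```python
import re

class FoldingBuffer:
    """Minimal sketch of the folding idea described above: a selected span
    collapses to its first sentence in hard brackets and can be expanded
    again later. All names here are invented for illustration."""

    def __init__(self, text):
        self.text = text
        self.folds = {}      # fold id -> hidden original span
        self.next_id = 0

    def fold(self, start, end):
        """Collapse text[start:end] to '[first sentence ...#id]'."""
        span = self.text[start:end]
        match = re.match(r".*?[.!?](?=\s|$)", span, re.DOTALL)
        first = (match.group(0) if match else span).strip()
        fold_id = self.next_id
        self.next_id += 1
        self.folds[fold_id] = span
        self.text = self.text[:start] + f"[{first} …#{fold_id}]" + self.text[end:]
        return fold_id

    def unfold(self, fold_id):
        """Clicking the bracket restores the stored span verbatim."""
        pattern = re.compile(r"\[[^\[\]]*…#" + str(fold_id) + r"\]")
        self.text = pattern.sub(lambda _: self.folds.pop(fold_id),
                                self.text, count=1)
```

A real version would also need stable anchors so folds survive edits to the surrounding text; this only models the visible behaviour.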

Alan Laidlaw: And, for the record, when I mentioned that struggle before, it's not like I'm putting the blame on articles. You know, obviously we have electric drills and they're not hot air balloons for a reason, right? Like, we have these tools, we have to use them. That's on us. No, no, no, no.

Frode Hegland: I’m just saying.

Speaker5: I know we’re not we’re not arguing for.

Frode Hegland: Our tools. Because also, if you want to communicate in a different way, like if you want to reinvent the web, why not write?

Speaker5: This.

Frode Hegland: You know, "a poor worker blames his tools"? Rubbish. Just think about the one with an electric drill that doesn't have the right bits, right? Mark.

Mark Anderson: I was just reflecting, when you were talking, on the thing of LLMs, or more contextual LLMs really, because I'm living through this, accidentally and ironically, in my support of the Tinderbox community. A guy turned up and said, I've got this cool tool that basically makes me an LLM, which as far as I can work out consists of about six PDFs about the application, about which I've probably written more than any other person, apart from, well, probably more than the person who made it. So in other words, his LLM is what I'm writing. And I was interested in asking why he was doing this. He said, well, I haven't got time to ask or read, I just want to chat with this thing. And he was complaining it was out of date. Well, I pointed out to him that you're now engaged in a conversation with the person who wrote what you're looking at, through the lens of the past. So why not ask me what the problem is, or indeed, more usefully, explain to me why it is you can't find what you want? Because I totally accept that as a fair critique of my writing, because that writing is deliberately designed for other people, and if people can't find stuff, then I've unintentionally failed in delivering on that. So there's something really interesting there that I'm still noodling with, and also wondering, you know, how small is an LLM allowed to get before it doesn't deliver? Maybe it can be very small, as small as one document. I don't know; it's not my area. But it is interesting. Another thing it shows is we have this compulsion, I think it falls out this way, that we all overvalue our own time massively and undervalue the time of others. The time and thought of others.

Speaker5: Oh yeah.

Mark Anderson: We don’t mean to, we don’t mean to do it. It’s absolutely not a deliberate thing, but I think it’s there in us all and it’s something I do try to hang on to that in dealing with things because often just at the moment when you’re about to say something, you know, something unhelpful to someone, you know, okay, fine. No, what I need to do is I need to try a bit harder.

Frode Hegland: But this is so important, Mark, because, you know, like the whole Socrates complaint about text: it's not interactive, it's not the original. You know, I've joked many times about developing a text system called Socrates that is that interactive. Of course, it would be phenomenal if we could develop a layer of text that accesses the author. One thing I tried to do in a video format a few years ago, and I just didn't have the heart to continue, was very simple. You sit in front of your computer and you record yourself answering questions. Right? So let's say you're a teacher. One of the questions might be, is there a dress code for classes? And you just answer it, and whenever you want, you keep building up this thing, and people can send you new questions. But the whole idea is that the interface from the other side is like a bot. You ask this thing questions, and if it doesn't know the answer, it'll try to do an AI thing, obviously. But if it does know the answer, it'll just play back that original video. Right? And that was kind of fun, but it's not really within my remit. But imagine if we could do that with text. Imagine, and now that I'm in a really interesting conversation with this triple-letter-acronym person: imagine if you could hand in a paper for an academic conference and also say, by the way, here is more. Not just here is the data, but here is the author's overlay.

Speaker5: You know, why can it.

Frode Hegland: Not be an official part of it?

Brandel Zachernuk: Yeah. And it sort of reminds me of recognizing that Word.

Alan Laidlaw: Press.

Brandel Zachernuk: Is a word processor. And all of these websites and apps that ask you for the, not sure if you're familiar with Facebook's Open Graph protocol, the images and text descriptions. Because we kind of conclude that that's meta information, but it's actually just information. And yeah, the preparation of a text is the preparation of an artifact for apprehension in a context. And people deciding on a book cover is the same thing as the image and the text. They're both sort of crushingly mundane as acts, for a lot of people and a lot of the time, but they're deciding what the information is. And one of the things that's really fascinating is recognizing, one, that that's just as contentful as writing the book, as authoring the text per se. And two, we should be open-minded about what other dimensions of something are imaginable and externally valid in a context, which may be worthwhile for you to consider. So, my mother-in-law is into Morris dancing, and they had a farewell thing because she's going back to New Zealand this week. And somebody had come up with a dance for the Wellerman, because the Wellerman is actually from our hometown of Dunedin, and everybody was very excited by that. You know, that song that was all the rage in the pandemic. And it's really like the repertoire.

Brandel Zachernuk: The expressive repertoire of Morris dancing is limited and has limited external legibility to many people. Most people aren't very good at it, and most people aren't very good at reading it. But nevertheless, it's a piece of media, and it's a piece that can be authored and considered and designed along with the rest of the slate of attributes and dimensions that exist for that media. And in that same sense, you should have a tool that obliges you to consider the image and the description. Maybe actually being able to trigger all of the Philips Hue light bulbs in an area, to say what color the light should be when somebody is reading it. If somebody has a preferred repertoire of scents that should be playing over the olfaction systems in the area, those are all not real today, but could be just as valid as mechanisms. And so, yeah, what the boundaries of a text are, what you mean by composing, arranging and presenting, and excluding various aspects of the things that you understood or thought about as a consequence of building them, are really interesting, really provocative. But we also don't know, for human agents, but also for non-human agents, of the kind that we don't have good enough models for today.

Frode Hegland: Yes, super important. We are over time today; I'm okay going further if you guys are. I think we've gotten onto a really good topic and I certainly don't want to cut it off. Do you guys want to do another 15? If someone has to leave, they have to leave. I mean, Brandel, I believe you have a job somewhere in some kind of a fruit orchard or something.

Speaker5: But yeah.

Brandel Zachernuk: I’ve got to drop in a few minutes. But yeah, I’m okay to listen rather than just talk and then leave.

Frode Hegland: All right. Leon, please.

Leon Van Kammen: Thanks. Very interesting points, Brandel. At some point I heard something like "OG image." Is that correct?

Alan Laidlaw: Original gangster. It's a term of endearment for rappers.

Brandel Zachernuk: OG is Open Graph. The Open Graph protocol was defined by Facebook, so when you post a link in Facebook, you get a picture and a paragraph. Similar thing for Twitter. So it's a sort of requisite component of a lot of people's media authoring. And similar for the keywords, the meta equivalents in the hypertext document, of like, this page is like that page, but in this language; something Apple went through a whole spate of in order to make sure that non-English search results turned up with the proper priority in those languages.
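For reference, Open Graph data lives in ordinary meta tags in a page's head. A minimal extractor, using only the Python standard library and purely for illustration, might look like this:

```python
from html.parser import HTMLParser

class OpenGraphExtractor(HTMLParser):
    """Collect Open Graph properties (og:title, og:image, ...) from a
    page's <meta> tags: the link-preview mechanism Brandel describes.
    Illustrative sketch, not a full Open Graph implementation."""

    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attributes = dict(attrs)
            prop = attributes.get("property", "")
            if prop.startswith("og:"):
                self.og[prop] = attributes.get("content", "")

parser = OpenGraphExtractor()
parser.feed('<head>'
            '<meta property="og:title" content="The Future of Text"/>'
            '<meta property="og:image" content="cover.png"/>'
            '</head>')
```

After feeding the page, `parser.og` holds the properties a social platform would use to build the picture-and-paragraph preview.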

Frode Hegland: I prefer Alan's definition, but whatever.

Speaker5: Sorry to bring it down.

Leon Van Kammen: Yeah, that is very, very useful. I was also thinking, and this might be a bit of a sidestep, but going back to Frode's earlier questions: if we have this spatial era in front of us and we have other ways of representing text, I can also imagine, and I'm going back to the summary, sorry for that (I'm not sorry), that the summaries might also indicate that we're overloaded with context, and people overvalue their time because of that. And because of that, they don't want to read too much, and they want to be sure before they read something, you know, what is the summary. So maybe, let's say, an XR reader application would support a format which always starts with a summary as the first page. It would then only show that summary page, and maybe it could be a small spec, a specification, that every sentence on that first summary page, if you click on it, will basically search the document for exactly that same phrase. Maybe it's a chapter, but then you can go further into detail. So, yeah, I think this summary could maybe replace the whole index which is usually in books; most of the time you have no idea what these words mean, but if it's in context, in a small summary, you might have a much better entry point to, you know, maybe read more or not. So that's just my few cents here.
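Leon's suggestion, a front page of summary sentences where each sentence links to the matching phrase in the full document, could be sketched like this. The function name and the naive sentence splitting are invented for illustration, not part of any existing reader:

```python
def summary_links(summary, document):
    """Map each sentence of a front-page summary to the character offset
    of the same phrase in the full document, so a 'click' on a summary
    sentence can jump to the matching passage. Illustrative only; a real
    spec would need fuzzier matching than verbatim search."""
    links = {}
    for raw in summary.split("."):
        sentence = raw.strip()
        if sentence:
            links[sentence] = document.find(sentence)  # -1 if not verbatim
    return links
```

An XR reader could render only the summary page, then use these offsets as jump targets into the body text.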

Speaker5: Okay.

Frode Hegland: I've got a dollar on that, not a cent. Yes, that would be nice; people won't do it. We should develop systems to fake it. Alan.

Alan Laidlaw: Yeah, on that, quickly, on the note of indexes and other kinds of indexes: I think that's another worthwhile topic, because I feel like there's a lot of unexplored territory there. A summary being one option, an index being another. What about, like, a kinematic index? An index that shows the joints, how these links are related to one another; like, you know, in reference, these concepts come first and lead to this thing, rather than just jumping to it. Anyway, the main thing I wanted to touch on was going back to Mark's comment, and on the theme of eroding the firmament that I sometimes do. This is pretty broad, what I'm about to say, but I like to think about it, and I'd love to talk about trying to identify the background truths or beliefs of modern life that we just assume were always that way. And they weren't always that way. One example might be this feeling of, well, I'm fascinated by the phenomenon of people charging for their newsletters, and that people pay for these newsletters, which is obviously valuable content, but they're paying oftentimes way more than they would for a book by that same author.

Alan Laidlaw: Right. So here's this newsletter that's coming out maybe weekly, maybe monthly, right? That this seems to be a workable pattern blows my mind. And it must come from some beliefs that we all share: that time is money, that more content is better, and, like, just add to the universe whatever you want to say. Right? And when did that start? Because you look at Da Vinci's notes, and he's going through and rewriting his own little passages with no interest in publishing his books or anything. He's just working through the ideas himself. And so I'm not trying to disparage the idea of blogging or newsletters or whatever, but it's wild that we've gotten to that place where it seems like, oh, obviously your time is money, so this is how you should spend it. Anyway.

Frode Hegland: Not just time is money, but, you know, when people want to look at provocative, you know, boobies online, without using too much language, there is still a market for people being there live, even though you can get pictures for free. So the whole idea of getting close to the person, even though it's a digital medium, whether it's, oh, this person wrote this today because it's a fresh newsletter, rather than something that they wrote for the market. I think the human intimacy thing is really part of what you're talking about. Yeah.

Speaker5: Yeah, yeah.

Brandel Zachernuk: I think the connectedness, the directness, the immediacy of it also feeds into the sense that it's representative of patronage and support as well, in a way where you get to pay less, or have to pay less, when you feel like it's a degree removed because the book is already done. Anyway, what I wanted to say before was that you guys just about made me want to quit my job to go work at another big tech company, because something that was really fascinating is, like, you're talking about libraries, you're talking about owning media and material and being able to integrate indices and glossaries and things like that. And it's just like, well, Apple does have some of that, but who really has it is Amazon. And if you really wanted to tackle the concept of what it means for somebody to own a book, what it means for somebody to have purchased a book and to have that constitute something that's part of their repertoire of things that are ready to hand: if you think about somebody having an Amazon Kindle, not that there aren't alternative ways of owning a book, but Amazon gets to decide what a book is, what owning a book is, and what having information that is relevant within the confines of those books ready to hand is. And it strikes me, as a fairly ardent Kindle book purchaser, that they have really slept on the ability to think about what it is that you can do with all of the media you have, how you can relate to and reflect on it, and the sort of recurring value you can get from it.

Brandel Zachernuk: Because there are so many people who read a lot of books and don't reflect on them, or people who read a lot of books and have to keep some kind of alternative mechanism of note-taking or references in order to get the benefit of it. And it's not that everybody needs to be able to quote chapter and verse of the latest bodice-ripper they have. But I think there's a sufficient contingent of people who have, on the one side, an interest in providing artifacts, presenting what a book is in such a way that it retains value for the people who have it, and on the other side, people who have enough wherewithal and interest to cultivate a more spirited, intentional construct of the totality of the books that they have, they've read, they've understood, they've interacted with. I think there's a lot to do there. And I'm actually a little bit deflated and devastated that more people haven't seen that. So let's do it.

Alan Laidlaw: Let's make a new book format. Let's take over. There could be so much more.

Speaker10: It’s such a yeah, it’s such a.

Alan Laidlaw: Wild case study. I'll just say this real quick. Okay, go ahead. No, no, no.

Speaker5: Of course.

Frode Hegland: Of course. Finish.

Speaker5: Of course. Finished. Well.

Alan Laidlaw: The history of ebooks is actually something I've been writing about a little bit. Not the history of ebooks exactly, but how we got into this mess, and ebooks as an alternate timeline of the Internet, for better or for worse. You know, it's great that we don't get ads inside of ebooks. That's wonderful. But I can't highlight and grab text from the highlight section; there's a character limit. So there's so much more that could be done in the ebook format, it blows my mind. I mean, in some ways it's good that it has lingered on, because if it were more sophisticated, there would certainly be ways to interject more monetization and awful, you know, reading experiences. But at the same time, I think books themselves could be personal knowledge management accessories. Your thoughts could be stored and remembered, because, you know, oh, that's like Zen and the Art of Motorcycle Maintenance, you know, that's related to a whole rant that I did. There could be a decentralized way of containing our thoughts inside of ebooks. That would just be beautiful.

Speaker5: Okay. Okay.

Frode Hegland: Right. So of course I agree. And the quick caveats are: ebooks aren't really used, which is bad. Also, I like PDFs because they can be around forever; it's just a thing. But one of the things I've been talking to the acronym people about is, why not put the entire XML of the document in Visual-Meta or whatever? I mean, I've told ACM, not that it's a secret who I'm talking to, that, you know, the name Visual-Meta, who gives a monkey's; if they want to change the name, if they want to change the format, it doesn't matter. All I'm really promoting is the idea of making clear what something is. I think that's a bit obvious. So I have absolutely nothing against storing, in these dead sheets of paper, stuff that is live. Okay, so the whole idea of reading a PDF as an epub-type thing, I completely agree with, right? But, you know, there are problems of what's available.

Speaker5: And this is where.

We start with that layering, and this is where we can start doing it. The epubs, excuse me, the PDFs that we make in our own community: we can build these augmentations on top. We can do this, right, and we should be able to make it readable in any form. You know, it's a cosmetics issue, but with text, cosmetics is everything. You know, text is what it looks like, literally. So this is a really, really important topic. And considering the vast amount of stuff we have as a community, this is a perfect place to start. You know, we talked about concepts, we talked about connections, we talked about all these things. We can experiment more. And it doesn't just have to be in Visual-Meta. It doesn't just have to be in the software we have access to. You know, Leon is a Unix person, a Linux person, excuse me; so is Fabian, and so on. So how can we start doing this? One of the things we want to do, of course, is have headset versions of this, or thinking-hat versions of this.

Frode Hegland: You know, it takes us two hours to really have a discussion and get to what we really want. And it’s kind of it’s kind of amazingly interesting. Oh, there was something else. Um.

Speaker5: Yeah.

Frode Hegland: Oh, here's another thing. So, reviewing the corrections for Hypertext. One of the interesting distinctions: a hypertext of cards, like literally HyperCard, going from one place to another, is different from a document where you have lots of links inside it. Raskin called it card sharks and holy scrolls, right? You could say that a plain normal PDF that has some internal links is a hypertext document; it just happens to be bound. And we've talked about the notion of binding before as being really, really important. Right now, for our own books, we only have the binding that I choose to make, because I'm the editor. That's not enough, right? Someone should be able to have the front page not be my table of contents; they should be able to choose, based on their AI system or whatever it is, a default view. That's probably the most important thing we can do. How they view the individual sheets is secondary, in a way. It's still important. But, you know, what is the entry point into this?

Speaker5: So what do you mean, what's the entry point?

Frode Hegland: Well, the entry point is currently: you download Future Text volume whatever, and you have page one, two, three. You should be able to download the Brandel version, where he's reorganized it completely, highlighted, all that stuff. You should be able to download it and view it based on your own, let's say you have a specific set of keywords or key concepts; you should be able to open it up and say, these pages refer to this stuff. You don't have to wade through it. So it's almost like you have this bound traditional codex underneath, but you're viewing it in an entirely flexible way on top. And one of the most exciting things that we've discussed today goes all the way back to the idea that if you're in a VR space and you're reading something, you should be able to call upon the author, if the author had done this, to stand there next to you and point to things and talk about things. We talked about that, and it's really exciting. There's no reason we can't do that in a limited way for PDF. One of the functions we have in Author is different ways, while you're authoring, to do highlights and bolds and so on to help the authoring. Vint said it would be interesting if this was exported as a view. So, to use the example of Mark earlier: while writing, you should be able to do all kinds of nonsense onto his work, and it's flattened to read in a normal way, but it's all there in the metadata at the end, so someone can choose to see, you know, Mark's angry scribbles, Mark's highlights, "I really mean this," the connections to that, and also, importantly, things that happen at a conference that are not in the papers. Et cetera, et cetera, et cetera. Peter.

Speaker4: Yeah, I think we really need to work with standoff markup in that case. For crying out loud.

Frode Hegland: I'll continually fight you.

Speaker5: After.

Speaker4: Okay. Also, I really like textbooks where some of them will actually have a graph representing all the different chapters, and show an alternate flowchart of different ways of going through the book, depending upon what level the reader is coming in at, or what kind of course the book might be being used in. It's almost like a meta index and table of contents at the front, so you can situate yourself in whichever guided path would be most appropriate, and the author would have maybe four or five alternate paths for how to tackle the material in the text.

Frode Hegland: Okay, I need to respond to both of those. Number one, the first one, standoff: fine, have as many things standoff as you want, but let's keep the crucial stuff in the document so we don't lose things. As far as the different paths are concerned: Brandel did this amazing VR walkthrough of a book where you could see the different sections in the library; that can be done, actually, in a gallery, and it's wonderful. But in terms of tool making, how do we somehow augment the making of these different levels without it being someone writing a whole different thing? Many levels.

Speaker5: You know?

Frode Hegland: Yeah. Mark and Peter.

Speaker4: Yeah. Oh, I was just going to say that standoff markup doesn't necessarily mean a different document; you could have standoff markup as Visual-Meta at the back of the actual document it's referring to.

Frode Hegland: Okay. We’re friends again.

Mark Anderson: Yeah, I mean, the standoff notion is basically being able to have lots of different things effectively pointing to the same text, without being stuck in a world where you can't have overlapping anchors. So it's something that, in a sense, has been around for a while; it's like when you listen to the British Museum describing how they annotate different versions of the same manuscript. It's a very immediate problem in some areas. The reason I put my hand up was just to reflect on the fact that, of course, with a book, it's just the author's narrative on the contents. Now, if it's a story, well, that's a fairly obvious mapping. The more you move away from that, into something that's teaching, or a scientific or academic journal, the more I really do think there's a lot of value in seeing the.
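Mark's point about standoff markup, annotations stored apart from the text and pointing in by character offsets so that overlapping anchors are unproblematic, can be shown with a toy example. The data shape here is invented for illustration and does not follow any particular standoff standard:

```python
def annotate(annotations, text, start, end, label):
    """Toy standoff annotation: the text itself is never modified; each
    annotation only records offsets into it, so spans may overlap freely,
    which nested inline tags cannot do. Illustrative only."""
    annotations.append({
        "start": start,
        "end": end,
        "label": label,
        "excerpt": text[start:end],  # redundant copy, useful to detect drift
    })

text = "standoff markup allows overlap"
notes = []
annotate(notes, text, 0, 15, "topic")     # "standoff markup"
annotate(notes, text, 9, 30, "comment")   # "markup allows overlap", overlaps
```

The two spans share the word "markup"; expressed as inline tags, one would have to split or nest them, while as standoff records they simply coexist. A real scheme would also need to version the text so the offsets stay valid.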

Speaker5: The what we.

Think of as the published article as just one of the presentation modes for what the author writes. I think the challenge that creates for us is that, you know, we can't just run on Microsoft Word anymore. It also means that people writing in that space are going to have to skill up in a way that they haven't before. It doesn't mean they won't or can't, but they'll have to learn new things to do it. I mean, I think it's actually a remarkably positive move, and I hope things go in that direction. I'm not holding my breath, because I can see all sorts of reasons why people wouldn't want to move to that. But it certainly would make for much richer digital documents. I think one of the challenges that a richer environment sort of necessitates is.

Brandel Zachernuk: A better conceptual lens for the authoring environment, to imagine the potential contexts in which the information is going to be apprehended. So, like, we have print preview, and you have overprint previews and things like that in Photoshop; WordPress has the ability to preview what your images and things like that will be. If your academic paper-authoring tool had a persistent artifact that showed how this would turn up as a citation, you would probably think about and craft that as a design object much more explicitly than if you had to do the computation of going, okay, based on what my paper is now, this is what it's going to look like. And so having this continual representation of what these proposed artifacts and representations are allows you to design for them, to understand what the practical consequences of your creative decisions are. So, for example, going back to something like the summaries of your seven hypertexts, you might.

Speaker5: To sort of.

Brandel Zachernuk: Change or update or add nuance to your text in such a way that a GPT can do a better job of summarizing those things. And you can interpret that cynically, or you can say, no, Mark Anderson is an author who has a desire for this information to be understood correctly, and so the ChatGPT summarization stuff is an important part of it. Where I think the intersubjectivity aspect is so challenging is that if you have screwed up your overprint or your gutter margins today, there are very concrete, hard-edged, simple actions that you can take: oh, I just need to pull this tab in, or, oh, I need to manage the overprint of these different color separations to make sure there's enough of a buffer here, and it's going to alter the line weight I put on the outside of my figures to make sure the read isn't going to screw up. If ChatGPT screws up the summary of your text, and you can anticipate that ahead of time, what does that mean you do? What does that mean for the viability of an ecosystem that is authentically supported by that tool? Because all you know is that ChatGPT didn't really give you a very satisfactory summary.

Brandel Zachernuk: What are the concrete actions that you can take, on the basis of understanding that ChatGPT is a necessary component of the ecosystem of work we engage in today, to go, okay, ChatGPT, do a better job please, other than doing its job for it? I genuinely do not know, and I genuinely do not know, from the ecosystem perspective, what has to happen to all of the moving parts in order for us to get there. But I see it now as an urgently required component of not just the tools, but also the discourse around what AI is. It's just like, well, if I have specific intentions, and it's not just "don't use my work," what can I do to make sure that I have the ability to intervene on it positively? Do I merely need to lobby ChatGPT to give a better summary here? Or what are the ways in which we actually intervene on this space to make sure that it does at least some of the things that we want, and ideally doesn't do some of the things that we don't want?

Frode Hegland: That is a very important consideration. One thing that came up earlier: I wanted to jokingly complain about Mark and Alan. In the conversation today, Mark and I had one of those arguments around the thing, like we often do, and then later Alan came along with a big shotgun, shot a hole through things, and said we're not looking deep enough. So what I mentioned a while ago is the idea that when you open a single document or a corpus of documents, how interesting could it be if it raised irritations or questions? So it's not actually trying to give you summaries; it's trying to ask you questions about what the issues might be, or pointing things out.

Frode Hegland: Because we have to decide what the heck we want these things for, right? Do we want them to give us answers, or to help formulate better questions? Right.

Alan Laidlaw: So, a quick thought on that. Around the 16th century, maybe early 17th, there was this practice in the burgeoning scientific publications of reviewing everything that came before, the prior art, to establish that I know what I'm talking about, the celestial bodies moving around, and so on. That became a convention, and it stuck around for a while. Whether it's good or bad, I think it worked in a time when everybody understood it as a convention: if you were already an expert, you could skip that stuff and jump to the last third of the book where the actually new stuff is. So I bring that up to ask, maybe what's needed, outside of a technological solution, is: what new conventions would be useful? Because asking questions is useful in some ways, but it's also perhaps super annoying for a reader who feels they've had their freedom taken away when maybe they just want to scan. So would a convention be: here's a scannable preview, like Brandel was mentioning, a context window of what is following, and, to extend it, maybe this is how it matches with what we know about you, with what we think you're interested in? Maybe that becomes a new convention for artifacts or published material. And I brought up the history example to say that's just one kind; history just wound up adopting that one. Maybe it's time for new ones. Anyway.

Frode Hegland: And there you go, another shotgun. Yeah, I mean, this is what I'm very, very grateful for today. Specifically grateful for today, because we've started questioning our assumptions, and that is obviously what this is about. So I can imagine one implementation, just to make it really simple. What we talked about is: imagine a library where you put in your whatevers, I'm not going to mention format, and you can specify what you want to do. "I want to learn stuff easily": that's one level. And then you have specific prompts to deal with the documents, but you should also have a different one, like we just talked about: "What's wrong with this?" So then we start having a copilot, and we also start addressing the issue of different levels of interrogation. Right? There's nothing wrong with an AI where you choose to say, present this as a summary in simple language; if it's outside your field and you want a cursory thing, nothing wrong with that. But equally, there's nothing wrong with having some seriously deep probing questions for something that you really want to question. Right? I mean, imagine a prompt that is: is there a PhD thesis in questioning the premise of this paper?

Alan Laidlaw: The answer had better be yes. It seems like that answer would almost always be yes.

Frode Hegland: But then imagine if we managed to build a knowledge base together. One thing I found really shockingly brilliant with this PDF Pals was when I took one of our journals and put it in, because we've got a ton of transcripts, and I asked, "What has Adam Warne said?" It gave me really good summaries of Adam's main points, as far as I remember them. Not necessarily perfect, but imagine if we had a way to feed our LLMs, which we've talked about a lot here, with maybe these transcripts, whatever. So when you read a corpus, maybe the AI could take a stab at, you know, what would Alan have to say about this? It may be completely wrong, but it may be a helpful provocation to think. Right?

Mark Anderson: Yeah, I think it's interesting. It also made me think, well, isn't this a bit like p-hacking your data? Not intentionally so. I think it's a really nice idea, this question of, well, how do I train the LLM? But that itself questions the whole notion of making LLMs, because the idea is, well, you make the model and it does all the work from there on in. But I certainly think the lesson I take from the current state of the art is that there is definitely a need for, effectively, a teacher's hand on the rudder. To basically say, okay, well, that was a good guess, but you left out the following points. Quite how that's done, I can't express more clearly, and given that it's a completely opaque machine inside, I have no idea. But there's a clear need for it.

Brandel Zachernuk: Yeah, we need a rudder to put a hand on.

Alan Laidlaw: Oh, go ahead. Sorry.

Brandel Zachernuk: Just that we need the rudder to put the hand on. Yeah.

Alan Laidlaw: And this being my last point on that: there is even something to a difficult problem. Right now I'm reading Dewey's How We Think, and it's from 1910, right? And since then the disciplines, the domains, have moved on. Things have been disproven, or whatever. I even saw a review that said, you don't need to read this book. I'm so glad I'm reading the book, because of his way of thinking, his way of presenting ideas. It doesn't even matter to me whether those ideas are at this point disproven or not, because I'm learning a new way of thinking just by spending time with him. And so that is another thing that's potentially lost or challenged by a world of increasing optimization. Getting an answer to your question in a prompt doesn't necessarily help you learn how this other person thought.

Frode Hegland: But then we have to look at how deep we want people to look at this, and this is really important. The kind of questions we're asking here is not really useful in everyday thinking, but it is really, really important for someone doing an important job, trying to get to the bottom of things. Yes.

Speaker5: Of course.

Mark Anderson: What we need and what we're willing to do are two separate things. That's half the problem. We want the answer because our time is important, and clearly the only reason we don't know it is that someone is unkindly not giving us the answer, when in fact the reality is very often more nuanced. And indeed, answering the question might take several loops through the subject.

Brandel Zachernuk: There are degrees to which the formalism and the structure of the way in which an answer is understood to be constituted and presented can help pull people toward or away from certain complexities. One of the things I think I've described here before, that I've played with in the past: when you're building a web page, as I built Apple.com for a number of years, you need to look at it on mobile phones, you need to look at it on iPads. And various people come up with saying, okay, these are the dimensions it will be at. Other people go further and say, these are the dimensions, and this is the shape of the phone that somebody is going to be looking at. So you open up Xcode and Simulator, and you'll actually have the actual shape of the device, kind of photorealistically rendered. It's better. And many people will not enunciate the reasons for which it is better to look at it on a real device, to have it held in the hand at the size and the distance that people tend to use. What would be better still, and nobody does this, understandably so, is to go and test it standing on a train, you know, go and walk along the street while using the website, because that is actually where people are.

Brandel Zachernuk: And it makes me think about print preview, like I was saying before. Something I've talked about, sort of jokingly, with people who literally work on Mail in the past is: what would a recipient preview of a document look like? Where you build a sort of construction of what it looks like if they get this email on their phone. Not just what does it look like on their phone, but what does it look like as they get the email on their phone, with somebody scrolling through it and reading it out loud, or possibly kind of murmuring some of the statements in it. What does it look like if they're sitting at their desk? What does it look like if they were in the middle of a discussion with somebody else when they received that email? All of those recipient contexts, those situations in which they apprehend that information, are very different. And the meaning they'll get from it is very different. And the obligatory work you do as a consequence of anticipating and expecting those contexts would probably be pretty different.

Brandel Zachernuk: And I think we have a false consensus around what the information is and the sort of context in which people come across it. We can probably tailor and pitch formalisms around reminding people of some of those things, to make sure they understand that what they're making is not necessarily what they think they're making. You know, people say that 60% to 70% of cold emails are misunderstood and misconstrued. To me, what that means, by definition, is that the information people think is in the email is not in the email. The answer is to try to rehydrate and think about what it is they're communicating, and to attempt to communicate it through something slightly different. And I think tools can help us do that, as long as they're presented in the right way: okay, so here's what it is, and kind of coach people through it. I agree that some of those things can be pretty repetitive, pretty stupid. A lot of people say they have a little me running in their heads, and it improves their work, because they have the ability to think about the repetitive things that I would ask them about their work, and so to think about whether the work is being done correctly. And I definitely wouldn't mind outsourcing that. I think an AI would be able to do a bang-up job of just asking that. Maybe don't do it here, but I'm very repetitive as a human being, and yeah, I can take that; that'd be fine.

Alan Laidlaw: I was literally just writing about this this morning, in the way that monks would, you know, whatever themselves. I was working on the about page for my website, right, which I will probably never publish, because I can't answer the "about" question. But this time I was going on about how odd it is that we have this thing called about pages. How weird is that, right? It's now a convention, but the very thing that you want from me is an X on a map of me-ness. And the very thing I want from you, this author, this user that I've never met, is: what do you want to know? What is your X and your map? And I don't have that. So we fall into these rituals and these performances of, well, I've gone to this school and I've done that, because that's all we have. It's hilarious. It's such a simple-seeming problem. But.

Speaker5: Yeah, okay.

Frode Hegland: We need to wind down. But I think what we're talking about now is applying context on reception, which is a really, really interesting thing. Not that long ago I got an email from Alan that I thought meant he was really pissed off. That's how I read it. We had a chat and, you know, he wasn't, but it's such a great example. We talk frequently, and he didn't say anything bad; it was my overinterpretation. You know, the Apple Watch does different kinds of things now, but I did hope it would develop a language where, when you get a message, if the answer is yes or no, there would be very distinct pings. Imagine if you're doing a presentation and you get a specific tap: you know someone is telling you no, and you don't need to look at the damn thing. That kind of stuff, where you are and when you're receiving it, is very, very important, and that creates fun things. I think it cuts through a lot of our stuff. "I'm wearing my thinking cap, I want to look deep into this issue" is entirely different from "I'm a teacher running to class, I've got my GPT thing, just please remind me what this concept is."

Speaker5: Yeah, that’s.

Brandel Zachernuk: Awesome. One if by land, two by sea. That would be awesome.

Speaker5: I didn’t have.

Frode Hegland: Edgar just walked in. Can you please repeat.

Brandel Zachernuk: One if by land, two if by sea?

Frode Hegland: Oh, yes, yes, exactly.

Mark Anderson: But that thing, that unambiguous signaling, made my mind fly to a favorite line from the movie Galaxy Quest: "I made the 'cut the signal' gesture." "No, you made the 'we're all dead' signal." And of course, it's hard to describe. I mean, I really like the idea. I'm sure that'll keep some engineers up late trying to figure out how you do that.

Alan Laidlaw: My wife just hit the skids when we were running. She went palms-first into the ground, because she had set up intervals on her watch for running and stopping and running, but her watch was muted. She didn't realize, and she was just trying to fudge with it to unmute it and toppled over. So that's a whole other phenomenon that happens: a wealth of affordances introduces whole new failures in things you would have taken for granted. You never would have fallen otherwise; it happens when we're distracted by this stuff. Anyway, I've got to go.

Frode Hegland: Just one second, really quickly. Edgar just came in and mumbled in my ear, "I was watching Harry Potter on my own. It wasn't scary." The reason it's relevant is that some of the more adult movies he's watched, Iron Man and Harry Potter and a few things, we watch with a little bit of ambient lighting, maybe 20 minutes at a time, another 20 minutes the next day. Versus turning the lights off and watching the whole thing: two entirely different experiences, of course. Right? I think that very much feeds into this. How immersed do we want to get?

Speaker5: Wow.

Frode Hegland: It’s not always good to get too immersed.

Speaker5: Yeah.

Alan Laidlaw: That’s a good note.

Frode Hegland: And Brandel, I'm still very unsure whether I'm able to make it to California for Vince, but he's trying to find some OpenDoc people for me to talk to. So we'll see. Anyway, thank you guys for today. We have gone way over, and it was particularly useful. I was just going to say, I really hope I can be there on Monday; if not, I'll warn you ahead of time. We're going to be in Malta for a few days, then Sicily, and taking the train to Rome to see Mark Anderson. That's the plan. So I'll do the best I can.

Frode Hegland: Brandel. You were saying something when I was mumbling.

Brandel Zachernuk: No, this has been really, really productive for sharpening my hostility to a much more particular point, and to a constructive point as well, rather than just, yeah. So thanks, I really appreciate the work.

Speaker5: Yeah.

Frode Hegland: No, I think that is exactly what we're doing. We're learning more about how to emphasize what we want, and then we're better understanding what the issues are. Yeah, greatly appreciated. Have a good week, everyone. Take good care.

Speaker5: Bye.

Speaker4: Bye bye.
