Mark Anderson: I caught you because I wanted to get back to you. I’m sorry. I did see your thing about the folding, and I just wanted to clarify one interesting comment you made. You said that folding had proved to be somewhat doubtful or difficult in the past. Is this something you found difficult in Author, or in general?
Frode Hegland: In Author.
Mark Anderson: Okay, cool. Yeah.
Frode Hegland: I know there are other applications, like the ones Claws came back and talked about, which have built-in folding of sections. I know it’s been done, but every time we’ve implemented it, you do that, you work in the document, you go back to expand, and there’s either an extra line break or some little niggle.
Mark Anderson: Yeah. So the thing is, I thought I’d catch you here rather than write it up, because it would make for a rather long email. My thought was: as you say, in a sense there’s nothing new here, but it’s something you want to do, and that’s fine. Essentially, what you’re doing is ad hoc folding. In other words, you want folding, but on whatever bits you want to fold, because as I see it, it’s more that you want to hide away the bits you don’t want to work on, and also be able to put the bits you do want on effectively the same screen.
Frode Hegland: For a few days I’ve been working on the hypertext history corrections for the paper. Yeah. And I find it really difficult and tedious, but I’ve managed to get into the rhythm of it. And of course I’ve cited you and Dave, which has been very useful. I categorise hypertext development as either tools for thought or knowledge representation, which is hugely helpful. In doing that, there are big things like the section on PDF, which is digital documents, outside of hypertext. When I scroll up and down, it’s a huge section that’s in the way. So what I’ve done now, of course, is just cut that into a new document and saved it. The whole point here, and I’m sure you get it, but just to make sure we’re on the same wavelength, is that cut and paste is very robust as long as the cursor is at the same spot. So all this is: you cut it, and then you click and it comes back.
Mark Anderson: Yeah. So the thought I had, which was easier to say here than in writing, because it would come across all wrong, is that this just creates a new problem: if you’ve got a document that’s using a fair degree of referencing and linking and that sort of thing, the danger of copy-pasting bits of it elsewhere, certainly outside the document, is now...
Frode Hegland: Not going outside.
Mark Anderson: So if it’s staying, if effectively it’s going somewhere in the document where it’s simply being hidden, that’s fine, as long as it remains within scope of the things that need to know where the references are. Because, for instance, what might happen is that you want to look at something like the reference list, but you happen to have hidden a bit that creates some references you would expect to see, and now they aren’t there because they’ve been cut. That’s a slight secondary concern I had, given the nature of what you’re trying to do with Author.
Frode Hegland: I don’t have a reference list as such available, which is something that I want to change up here. Yeah.
Mark Anderson: Do you get my general point? I didn’t want to belabor it. It was just that in solving one problem, you don’t want to give yourself another, because the worst case would be that you do all this work and commit to it, and then you find: oh, holy crap, it’s now stopping something else from working.
Frode Hegland: Absolutely. But let me just show you. So this is my actual live paper. I do cmd-X on this section and it’s, of course, gone. And you have this: the Cuttings. Everything is saved in the Cuttings when you cut; copying doesn’t do anything, copies just stay. But the whole idea is that you put that cut bit in a specific list for Author and replace it with this other text. And when you click on that, it just re-pastes.
Mark Anderson: In a way, what you’re doing is sort of stretch text, and I know that sounds rather trite. No, no, you’re absolutely right, thank you. You are effecting ad hoc stretch text. And the thing I’m working on for Ted Goranson in Australia at the moment: I was shocked to discover that I did this for him back in 2010. His blog uses collapsing headlines extensively; it actually has collapsing headlines for images and figures, and it also has what he calls parenthetical text, which is stretch text. It’s really quite funky. It has reusable stretch text. So you have...
Mark Anderson: Sort of bits that are reused, that reuse other bits. It’s like a set of nested dolls; it’s really quite elegant. But it seems to me that that’s a useful alignment to make, not so much to explicitly call it stretch text, but to keep that in the picture. Because what it might mean is that it may all resolve so that what you end up with is a stretch text feature that actually does the thing you’re wanting, as opposed to the thing you’re wanting merely being stretch text, if you see what I mean. It gives you extra bang for your buck.
Frode Hegland: Yeah, absolutely. But the thing that has come out of this for me is that it’s arbitrary. It doesn’t rely on a heading.
Mark Anderson: You know. No, absolutely not. And I totally get that because.
Frode Hegland: No, I think you do. I think you really do. So here, you know, if I had done this bit, this is of course what would appear. Yeah, right. And using the hard brackets, it is very stretch-texty; there’s no question about that. And the concern I had, that maybe people would edit it or ruin it: it’s the same as with citations. Citations in Author are just in brackets. You click on one, you get stuff.
Mark Anderson: Yeah. So if you click into the bit you’ve excerpted, because I don’t know if this is actually live in the app yet, but I assume that, as you intend to implement it, were you to click into that, it would magically bring the thing back.
Frode Hegland: That’s all it would do.
Mark Anderson: Yeah, no, that’s fine, because that gets you around your problem. Well, the only thing you might unintentionally do would be, say, some edit that involved selecting something that included a collapsed section, because you weren’t paying attention, and deleting it. That’s one thing to probably talk through with your programmer: okay, what happens in that instance? Do I lose the hidden section? I suspect the answer is: it’s whatever you want it to be. But that’s just one edge case that I could see occurring.
Frode Hegland: I think that’s a really good point. I’m going to note that down.
Mark Anderson: Because my supposition is that effectively you wouldn’t want that to happen. So worst case, for instance, what might occur is that all that visible text goes, but the stretch text that was within that section is still available in whatever buffer it’s in, so it could be called back. It just won’t be visible on the page, but it is not lost, because that could have been tons of work.
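The cut-and-restore mechanism demonstrated above, and the deleted-placeholder edge case just raised, can be sketched as a minimal model. This is an illustration only, not Author’s actual implementation: the names (`FoldingDocument`, `cutSection`, `expand`, `recover`) are invented, and the document is modelled as a plain string.

```javascript
// Minimal sketch of "ad hoc folding" via a cuttings buffer.
// Cut text is never discarded: even if the placeholder is later
// deleted in an edit, the cutting stays in the buffer and can be
// called back, which is the behaviour Mark argues for above.
class FoldingDocument {
  constructor(text) {
    this.text = text;
    this.cuttings = new Map(); // id -> hidden text
    this.nextId = 1;
  }

  // Replace the range [start, end) with a visible placeholder marker.
  cutSection(start, end) {
    const id = this.nextId++;
    this.cuttings.set(id, this.text.slice(start, end));
    const marker = `[…${id}…]`;
    this.text = this.text.slice(0, start) + marker + this.text.slice(end);
    return id;
  }

  // Clicking the placeholder re-pastes the hidden text in place.
  expand(id) {
    const marker = `[…${id}…]`;
    const at = this.text.indexOf(marker);
    if (at === -1) return false; // placeholder was deleted from the text
    this.text =
      this.text.slice(0, at) +
      this.cuttings.get(id) +
      this.text.slice(at + marker.length);
    this.cuttings.delete(id);
    return true;
  }

  // The edge case: even after the placeholder is deleted, the hidden
  // section survives in the buffer and can still be recovered.
  recover(id) {
    return this.cuttings.get(id);
  }
}
```

The key design choice is that `expand` and `recover` are separate: removing the marker from the visible text does not empty the buffer, so a careless selection-and-delete cannot destroy the hidden work.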
Frode Hegland: Yeah, no, I’m noting that down. I think that’s a very important point, actually.
Mark Anderson: I mean, I do think there is mileage, separately, in being able to collapse to a heading, which is distinct from the zoomed heading you have, which serves a different purpose. But in terms of general display, and again thinking back, because I’m still working on this thing for Ted Goranson: on his blog, all subsidiary headings within an article basically open collapsed, and you open them up if you want them, and there’s a sort of teaser text. That’s another interesting thing, actually, that comes out of this. For his parenthetical text, his stretch text, he has what he calls a teaser text, which basically tells you what the bit is. You could think of it like a sort of page header, or like an epigram for a section. But it’s another interesting idea; it’s a different way of hiding stuff. So you might say: no, I just want to hide the whole of chapter two, and there’s just a little bit that reminds you what it’s about. Now, obviously, if you’re the author, you sort of know, but it’s a useful general thing, because there’s nothing to say that you might not give a document you wrote to somebody else who’s going to read it in Author for whatever reason.
Frode Hegland: But... well, I don’t know. Author is an authoring environment.
Mark Anderson: Yeah, well, all right, here’s another way to think of it. It might be that you end up with a sort of output mechanism, forget exactly what format it’s going to, where, for instance, you have this little descriptive text available as a prompt. So you might effectively be able to see what all the chapters or sections are about, because there’s just a sentence that says what it is. It’s quite interesting, because one of the things I’m really chewing on at the moment in this stuff I’m doing at Southampton is that it’s a real social engineering problem: breaking people out of PDF. Not because it’s bad, that’s not the point; it’s basically where information goes to die, notwithstanding things like Visual-Meta, which addresses the same problem a different way. The reason I’ve come to this is that the project I got pulled into is basically doing what I’m calling a physical sciences data infrastructure. The idea is you keep the data, not just the published papers. But what’s massively apparent is that the questions people want to ask, which are effectively ChatGPT-like questions, like “show me all the interesting information on X”, require a degree of...
Mark Anderson: ...metadata, and that just isn’t coming from what people can be bothered to store. People are using linked data, for instance, but linked data will link the data you give it; it won’t tell you that you haven’t put in any useful information. So what you end up with, effectively, is all the countable things, like when it was published and who published it, all things that are quite easy to find out, and then it points to a glob of info, which is the paper. Now, the paper is normally a PDF, but that’s almost by the by; it could be HTML. The point is there’s a naive assumption, untested, that the answers to all the questions somebody might want to ask are actually in the paper. And I believe that’s fundamentally flawed. So the pivot that I think science ought to be making is to say: okay, the paper is actually just an output representation. In other words, what you write is the data used to create the paper. Which also lends itself to what people are saying about structured documents, and this came up when we were talking about policy and structured abstracts, because to a certain extent it would also be easy to build much better and more structured PDFs, or whatever format, from information that knows what it is.
Frode Hegland: I can tell you, on that last point, one thing: if the funding is there to do all of this as research, then one of the first things I’ll do is an export mechanism from Author to approved Word for ACM, because their template system is absolute crazy lunacy. So in terms of structured writing, I agree with you. Peter? Yep.
Peter Wasilko: Hi. Yeah, sorry I’m late. I just wanted to finish breakfast first so that I could come on cam without you seeing me shoveling food into my mouth.
Frode Hegland: Not a bad idea. We were just talking about this hiding thing. And now I probably have lots of private stuff on here, but anyway. So Mark was talking about stretch text. One thing, as of course you all know: when you click on the citations, which are numbers, all of this good stuff comes up. There’s no reason, Mark, from what you said, that if one of these were actually a folded section, you couldn’t choose to keep it folded on export. So in something like Reader, you could click and then see it in a pop-up. Of course it would be nicer to reflow, but in PDF that can’t be done. But at least having it...
Mark Anderson: That’s the interesting thing too. It’s terribly easy to unintentionally stray into a zero-sum argument about PDFs. But one of the things I’m constantly reminded of is that we’re using something that was designed to make things print out the same on a 1980s microcomputer. That’s what it was designed for. It’s traveled a long way since then, but it suffers dramatically from that, because different PDFs may look... I mean, the typographic quality tells you nothing about, as it were, the cleanliness of the inside of the PDF. And with the tools most of us have access to, basically, unless you start paying real money, you’re not going to get anything that will show you what’s really inside a PDF. So they can be marvelous, full of all sorts of wonderful stuff. Most aren’t. And what’s really surprised me: even recently, when I was doing some text extraction from, I think, some 2021 papers, I was still getting results that look like something that would come out of 1990s OCR. I have no idea why.
Frode Hegland: That’s of course a valid criticism. And, you know, since we have been talking, I have implemented the library in here, by the way, so we now have this. Right. And we can do selections, and it shows up here, and it does search and all of that stuff. So there are many PDFs, no question, where you think you’ve selected something and you’ve actually selected something else. There’s no question about that. But I wouldn’t say it’s an inherent problem.
Mark Anderson: Well, the thing is, that’s what I thought: I was using some modern PDFs, and it turns out, for reasons I don’t know... well, here’s the thing: I don’t know what was used to create them. And it could be something that’s been knocking around, because a lot of people don’t invest in their tools; they’ll take whatever just works, or borrow something. And it’s sort of difficult, because this really came home to me when I did a plain-text export of the whole ACM corpus, up to what must have been 2021, because somebody I was going to work with anyway wanted it; they were going to do some digraph analysis.
Frode Hegland: I accept that not all tools are optimal. I accept that PDF can carry a lot of nonsense. By the way, look at this: hit H for highlight, see, on the left, and the library does it automatically. But when you made the assertion earlier that PDF is where data goes to die, I think that is exactly the opposite. And the reason for that is that I went through the phase you’ve gone through a long time ago, looking much deeper at older work. You know, Flash is dead, web pages die, old formats disappear. The fact that PDF, especially PDF/A, the archival version, has survived, and that for a lot of work it’s just text: it’s the most long-lived medium ever.
Mark Anderson: Yeah, but what we’re arguing...
Frode Hegland: ...is whether it’s bad or good. Quickly.
Mark Anderson: It’s not as simple as that. What PDF seems to lack is a consistent notion of a clean text, because the page is PostScript. All the things like the page numbers and the page headers are effectively part of the same page, because the format was designed for paper output.
Frode Hegland: Right. Okay, Mark. Look, there are a lot of things we can discuss, and there are absolutely things that could be improved in PDF. If there are better real alternatives, we can spend time on that. But I honestly think it isn’t that useful, because nobody...
Mark Anderson: You’re missing the point. I’m just trying to say that I think it would be really useful, and I see no reason why PDF couldn’t embrace the notion of having, for want of a better word, a clean version of the text of the document, as opposed to everything that’s physically printed onto the page, which is where it is at the moment. What you often have to do is extract, or typographically unset, the page to get back the digital text which you read on it, which seems to be pointless make-work. And it’s why there are, I discovered, whole academic communities whose entire work is basically trying to do this, which is insanity in this age. We’re taking digital text, putting it into a format that we then have to do significant work on just to get back out the text we put in. It’s as simple as that. And it could easily be solved by PDF embracing the notion of a clean text layer, because that’s what machine readers want. They don’t want all the other dreck; that’s only useful for humans.
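The “typographically unsetting” described above is real make-work: naively extracted PDF text interleaves running headers and page numbers with the body, because to the format they are all just marks on the page. Here is a rough sketch of one common cleanup step, assuming extraction has already produced one text string per page; the function name is invented and real pipelines do considerably more.

```javascript
// Sketch: recover "clean" body text from per-page extracted text by
// dropping lines that repeat on every page (running headers/footers)
// and lines that are bare page numbers. A clean text layer inside the
// PDF itself would make this whole step unnecessary.
function cleanExtractedPages(pages) {
  // Count, for each distinct line, on how many pages it appears.
  const lineCounts = new Map();
  for (const page of pages) {
    for (const line of new Set(page.split('\n'))) {
      lineCounts.set(line, (lineCounts.get(line) || 0) + 1);
    }
  }

  const isBoilerplate = (line) =>
    /^\s*\d+\s*$/.test(line) || // bare page number
    (pages.length > 1 && lineCounts.get(line) >= pages.length); // on every page

  return pages
    .map((page) => page.split('\n').filter((l) => !isBoilerplate(l)).join('\n'))
    .join('\n');
}
```

The repeated-line heuristic is crude (a sentence that genuinely recurs on every page would be lost), which is exactly why whole communities work on this problem rather than it being a solved one.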
Frode Hegland: In my experience you are correct, but it is less of an issue when, for instance... here, I’ll show you guys. So I was just showing everyone: we have a library now, Brandel. It’s so cool. But anyway, where is it?
Brandel Zachernuk: But.
Frode Hegland: Yeah. So, okay, let’s open this one, our journal from March. Right. And go to whatever page. When I select text, it’s fine. It’s a modern, normally available, correct rendering engine; I don’t spend money on my PDF rendering engine, I don’t have money for that. And then I run things like Ask AI on it, or whatever, and it’s given proper text. If I paste it somewhere else, it’s nice and clean, proper text.
Mark Anderson: Yeah, no, I don’t doubt that. But not everyone is using Author; that’s the point. There’s the general population of PDFs, the larger number of PDFs. My experience is just from using the PDFs that I need to use in the course of work. That’s why this is not a critique of what Author is doing. No, I understand that. It’s a structural weakness...
Frode Hegland: ...in the output format. Whatever it is, someone will be able to do it well and someone will do it badly. That’s always going to be there. Even if a fantastic system like you’re talking about were introduced, there would be that issue.
Peter Wasilko: Yeah, I really wish that we had stand-off markup for PDF, so that I could add annotations in a separate text document and then have them get merged in by the rendering engine.
Frode Hegland: You know, I’ve been dealing with too many old documents to think that; it actually is really nice to have everything in one, because connections die, links die, things disappear and so on. By the way, hang on, just a little update; I’m just going to pause for that. A lot of this work is with a view to getting it right in 2D so we can go 3D. In October, November, the Vision Pro won’t be available for developer walk-ins yet, or will it? Because it’s lab software; it’s not for the device, right?
Brandel Zachernuk: The walk-in labs, I believe, will feature the device.
Frode Hegland: Okay, because I’m looking for reasons to go to California. I can’t remember, October, November? Obviously not when we have the Future of Text; I need to be here. But Vint is having a big birthday party. So if I can do other useful things, like come by you guys, then I might go. The guest list is really good for schmoozing. It’s a bit crazy.
Brandel Zachernuk: That sounds like it would be a really wonderful thing to go to. I think, to the best of my knowledge, the labs feature physical devices, not that you would necessarily have one dedicated one-for-one, and I’m not sure about the time period that people have with them. I’ve seen from the news that they’ve maybe been undercapitalized, and also that at this point there aren’t any on the East Coast of the United States. I don’t know if it makes much of a difference to you, but I think that’s a little silly. So I’m interested in finding out if that’s happening elsewhere. There is stuff in Germany, right? There are workshops in Germany.
Frode Hegland: Well, there will be something in the UK, but until I get the funding that I mentioned, if that happens, I don’t really have the money to build for the machine. I’m spending a couple of thousand now just to fix the Map view in Author. Actually, can I show you guys this? It’s really relevant to us. Yeah. Okay, fine. Good. Thank you. Right. So in Author, on the Map here, we have this hide and show, and you can easily toggle it, which is nice, right? And I think that’s crucial, because otherwise it’ll get overloaded. But one thing that I desperately need is that when you bring something back, the items should all be selected, so that you can instantly move them somewhere else or choose an automatic layout, horizontal or vertical. It’s really bitty to do this by hand. So I’m also thinking that in a VR space you should be able to just click all the names, have them selected, and then you can do something like this and work on them further.
Brandel Zachernuk: Yeah, that would be really great to have available. If you are arranging those items, then I would strongly recommend looking to see if there are any social network analysis algorithms that can be leveraged: things like the Harel-Koren fast multiscale layout, and there’s a Japanese one as well that reproduces the appearance of an org chart. Now, that one mostly works for directed acyclic graphs, which yours is not, but giving it a go is pretty useful. They’re present in the NodeXL application, which is for Microsoft Excel on Windows, but they’re generic algorithms and very much worthwhile. And don’t, say, pay one of your developers to recapitulate them, but seek out whether an implementation already exists in such a way that you might be able to make use of those arrangement algorithms.
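As a flavour of what an automatic arrangement algorithm does, here is a deliberately tiny force-directed layout: connected nodes attract, all nodes repel, and positions settle into a spread-out arrangement. This is a toy illustration, not the Harel-Koren multiscale method or a layered org-chart layout; as recommended above, a real application should reuse an existing implementation rather than this sketch.

```javascript
// Toy force-directed layout over nodes with {x, y} positions and
// edges given as index pairs. Each step: every pair of nodes repels
// (inverse-square), each edge pulls its endpoints together (spring),
// then positions move a small fraction of the accumulated force.
function forceLayout(positions, edges, steps = 200) {
  const pos = positions.map((p) => ({ ...p }));
  for (let s = 0; s < steps; s++) {
    const force = pos.map(() => ({ x: 0, y: 0 }));

    // Repulsion between every pair of nodes.
    for (let i = 0; i < pos.length; i++) {
      for (let j = i + 1; j < pos.length; j++) {
        const dx = pos[i].x - pos[j].x;
        const dy = pos[i].y - pos[j].y;
        const d2 = dx * dx + dy * dy + 0.01; // avoid division by zero
        const f = 1 / d2;
        force[i].x += f * dx; force[i].y += f * dy;
        force[j].x -= f * dx; force[j].y -= f * dy;
      }
    }

    // Spring attraction along edges.
    for (const [a, b] of edges) {
      const dx = pos[b].x - pos[a].x;
      const dy = pos[b].y - pos[a].y;
      force[a].x += 0.05 * dx; force[a].y += 0.05 * dy;
      force[b].x -= 0.05 * dx; force[b].y -= 0.05 * dy;
    }

    // Damped position update.
    for (let i = 0; i < pos.length; i++) {
      pos[i].x += 0.1 * force[i].x;
      pos[i].y += 0.1 * force[i].y;
    }
  }
  return pos;
}
```

Even this toy shows the point of “bring them back selected, then auto-arrange”: the user never places individual nodes; the algorithm spreads them out from wherever they reappear.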
Frode Hegland: It’s really interesting you should say that today, because one of the things in the Map view is that when you assign, let’s say, a person, it’s italic; it has that display. I was thinking of adding one called prime object, or primary, or special somehow, and it makes them red. So when you’re talking about directed graphs, they can then have a special value. You could make a mind map from that if you want, or whatever.
Brandel Zachernuk: Yeah, I think that would be really productive. So the other thing is that, well, it doesn’t matter for you particularly, but all of the sort of dev kitchens, I don’t know if that’s the external name, that’s what people have been calling them internally, don’t tell anybody. It’s not as secret as Vision Pro was, though, so it shouldn’t matter too much. They’re primarily focused on native applications at this point. Yeah.
Frode Hegland: Well, that’s what I want to do.
Brandel Zachernuk: No, it makes perfect sense for you and what your application currently is. There aren’t any dev kitchens for web yet, but there should be. It’s sort of dawning on people that it should be somebody’s job to be responsible for making sure that people are aware of the necessity of building things for the spatial web. Okay.
Frode Hegland: I don’t know your layout, but I’m pointing at you.
Speaker5: Well. Yeah. Yeah.
Brandel Zachernuk: So that’s the growing consensus at the company: that we should have somebody, that it should be me, and that I will hopefully have the ability to conduct and organize web-focused discussions, which maybe I might be able to get around for. My travel to Spain was actually declined, so I won’t be coming through around the same time as you all are at Hypertext, unfortunately. I mean, I would have missed you anyway, so I will be attending that conference from home. Yeah, it’s disappointing. We’re working on things like the model element, and I don’t know if we’ll get to the point of talking about things like 3D text yet, but that’s obviously on the cards somewhere for somebody. But the function of my stewardship of the spatial web, and specifically those textual functions and the function of the web as a text, are something I will be particularly keen to hammer on, because a lot of the folks who are involved with even the spatial web stuff at Apple, but especially elsewhere, don’t think of the web as still being primarily for documents and information and text. They think of it as a mode, a runtime we call it, for being able to access things that happen to be web features but are mostly just for making video games or watching videos. And I disagree. I think that even though a lot of people do that on the Internet, a lot of people are consulting texts, even if it’s Google searches or Wikipedia answers and things like that. So I’m, unfortunately, largely alone in the world of people who have any sort of ability to bring any influence to bear on the concept of the spatial web. But I look forward to trying.
Frode Hegland: Many things. Number one: feel free to invite anybody you want to contribute to the book, partly on that topic, so that it makes them think about it. Use it as an excuse if you want.
Frode Hegland: That’s part of the reason we do the book. It’s not just for the reader; it’s to remind the authors of their issues. But secondly, going through all kinds of old hypertext stuff this week: one of the things that Ted Nelson hates is the fact that documents can’t talk to each other. Obviously, he wants visibly connected links, right? Of course, you can have that within an application, but not between applications. And the Human in Hypertext talk that I’m doing at Hypertext ’23 in Rome, coming up, is all based on where things are, and that we are space, like we’ve been talking about for so long now. And I’m guessing that things that are in the frame or the volume or the space in visionOS cannot visually talk to each other, right? It’s the same as applications, in a sense. Right?
Brandel Zachernuk: Yes. I mean, the standard application model applies, in that if one application wants to know about another, it might be possible to seek entitlements. And if you author both of them, then you may have the ability to do that. But, you know, if there’s a Photoshop for the system, they haven’t been ready for that. I don’t think, say, GarageBand is allowed to know that Safari is open, and vice versa.
Speaker5: Oh, really?
Frode Hegland: Okay. That’s interesting.
Brandel Zachernuk: Well, you know, the application sandbox model is a well-regarded kind of fixture, which means it reduces the room for abuse, but it also means that these applications effectively expect to be existing in a vacuum even when they know that they’re not. And so that results in things like Safari only having very diffuse signals, like being under memory pressure, which is what they’ve talked about in the past on iPhone and Mac, when it’ll jettison various resources if it’s leaned on by the operating system asking: is there any way you can be thinner? And so it’s likely that those kinds of infrastructural, architectural decisions remain within visionOS as well. That said, on the web, if you are responsible for the same domain across multiple pages that happen to be open, so if you have futuretext.info and you have three windows open in different places, they can talk to each other. They know how big they are; you can send information between them, and things like that. And likewise, if you have the ability to stitch things through some kind of intermediating web server that each of them is allowed access to, then you would be able to do that. So it’s not that you can by default, but if you have Wikipedia and this other site, or if you have web extensions that permit this cross-site thing, then you would have the ability to do that arbitrarily. The downside, obviously, is that it’s a tremendous vector for abuse. So you would either have to very carefully protect the ability to maintain that channel between all websites, or have it fall prey to whatever decides it wants to get at banking information and things like that.
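The same-origin channel described above, multiple open windows from one domain talking to each other, is commonly done in browsers with `BroadcastChannel`, while messages from other windows arrive via `postMessage` events and must be origin-checked, which is exactly the abuse vector raised here. A sketch; the domain, channel name, and message shapes are invented examples.

```javascript
// Sketch: cross-window coordination for pages on the same origin.
// Every open window on the same origin can join the same named
// BroadcastChannel; cross-window postMessage events must be checked
// against an allow-list of origins before their data is trusted.
const TRUSTED_ORIGINS = ['https://futuretext.info']; // invented example

function isTrustedOrigin(origin) {
  return TRUSTED_ORIGINS.includes(origin);
}

// Returns the message payload only when the sender's origin is trusted.
function handleIncoming(event) {
  if (!isTrustedOrigin(event.origin)) return null; // drop untrusted senders
  return event.data; // e.g. { window: 'map', bounds: { w: 800, h: 600 } }
}

// Browser-only wiring, guarded so the sketch also loads outside a browser.
if (typeof window !== 'undefined' && 'BroadcastChannel' in window) {
  const channel = new BroadcastChannel('futuretext-windows');
  channel.onmessage = (e) => console.log('sibling window said:', e.data);
  channel.postMessage({ window: 'author', bounds: { w: 800, h: 600 } });
  window.addEventListener('message', (e) => handleIncoming(e));
}
```

`BroadcastChannel` is scoped to a single origin by design, which is why the arbitrary cross-site version discussed above would need an intermediating server or an extension, and why it is so dangerous to open up.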
Frode Hegland: So, a very simple question that’s really important for everybody who is going to do any kind of development, either real or, let’s not call it virtual, but hypothetical, in this environment. Let’s say I have an Author view; it’s the main view. If I, as a developer, choose to allow the user to invoke the Map at the same time, will that have to be within that frame, or will it have a free-floating frame, like a window?
Brandel Zachernuk: If it’s the same application, it can open as many spaces as it wants, and they have full information about their 3D positions relative to each other. So if you have one app and it has two documents, that’s fair game; they can know everything about each other. If you have one app and it has, say, an instrument panel or something, then that also is fair game. You know, one of the things that people have said in the Human Interface Guidelines and the tutorial videos and workshops is that window management is surprisingly onerous. Having multiple windows that people have to attend to, look at, and think about the positioning of is a lot more cognitively taxing than you would expect, and probably than they had expected at the time.
Speaker5: Oh, no, no, no.
Brandel Zachernuk: I’m fully with you on that issue.
Frode Hegland: Fully accept that issue.
Brandel Zachernuk: Okay. So with that in mind, yeah, you get whatever you want.
Frode Hegland: Okay, because the problem with 3D, at least the way we used to think about it, and I’m sure it’s the same now, is that 3D is messier. There are more places you can lose things in a 3D kind of desktop. So I think that is a huge issue. I’m thinking more about questions such as: can I, as a developer, choose the size and proportions of the document? Does it have to fit a default, or can I make it any shape I want? And he said okay. Because this is where the rubber hits the road. Even though we’re talking a lot, with both Mark and Peter, about the notion of beginning with the basics, like you have a word processor or whatever it might be here, or a PDF (sorry, Mark, I’m just trying to be funny), or whatever piece of traditional nonsense: we have this dream of other stuff happening. So within visionOS, the things that we’ve discussed, such as: you have a PDF or Word document, whatever, I’m not going to try to be too controversial, and you do a gesture or a command, and all the known images can then be put in another space. So, for example, architecturally, that’s possible, right?
Brandel Zachernuk: Yeah. Yeah. You would need to be making special requests for the raw hand movement. By default, you’re just getting information about what they’re calling taps, but pinches in the air that relate to where your eyes were pointing at the moment that the pinch went down. And that’s the standard model for interaction within the platform.
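The interaction model Brandel describes, where an app receives a synthesized "tap" at whatever the eyes were targeting at the instant the pinch closed, rather than raw hand data, can be sketched as a tiny event model. This is a hypothetical illustration only: the class and method names are invented here, and the real visionOS API has a different shape.

```python
from dataclasses import dataclass

@dataclass
class Tap:
    """Synthesized indirect-tap event: the pinch supplies the 'click',
    the gaze supplies the target."""
    target: str
    position: tuple

class GazePinchInput:
    """Hypothetical sketch: apps see Taps, not raw hand movement,
    unless they explicitly request hand tracking."""
    def __init__(self):
        self.gaze_target = None
        self.gaze_position = (0.0, 0.0, 0.0)
        self.taps = []

    def look_at(self, target, position):
        # Eye tracking continuously updates the current gaze target.
        self.gaze_target = target
        self.gaze_position = position

    def pinch_down(self):
        # The tap resolves to wherever the eyes pointed at pinch time;
        # gaze movement after the pinch does not retarget it.
        if self.gaze_target is not None:
            self.taps.append(Tap(self.gaze_target, self.gaze_position))

inp = GazePinchInput()
inp.look_at("close-button", (0.1, 1.5, -0.4))
inp.pinch_down()
inp.look_at("scroll-bar", (0.3, 1.2, -0.4))  # gaze moves on after the pinch
print(inp.taps[0].target)  # prints "close-button"
```

The point of the sketch is the decoupling: the hand provides only the trigger, so the system never needs to expose where the hand actually is.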
Frode Hegland: And that’s great. In visionOS, can you define where the real walls in your room are?
Brandel Zachernuk: I believe that it recognizes them and will default to making them more readily snappable: you can snap windows to walls, and you can snap documents to align to them via a sort of hysteresis. I don’t know that you have the ability to make specific declarations about what semantic meaning surfaces have, but there are a lot of smarts under the hood that are responsible for trying to make determinations like that.
Frode Hegland: I think that’ll be absolutely crucial at some point in terms of, quote unquote, window managers. Because imagine: people who invest in these headsets are not going to use them in only one place. Maybe the first one, because it’s so damn expensive. But down the line, you’ll have your office, you’ll have your home, etcetera. Right. So I really think that.
Frode Hegland: Okay, so as I’ve told everybody a million times, I have a 27-inch display and I absolutely love it. But often I’m on the 13-inch, and when I go between them, it is a bit of a mess. It’s not very well done at all. Fine, it’s not a huge issue. But I’m thinking about a virtual environment where I have really used the room to hold information for me, like Peter talks about with memory palaces. Let’s say I have my library there, and I have these kinds of books over there. When I go somewhere else, the layout of the room will probably be different. But if I have semantically been able to say, I consider this the right wall, I consider that the gallery, then when I go to that other room and open the same environment, to have these things snap to that will probably be very useful.
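Frode's idea of semantic placement that survives a change of room can be sketched as a small remapping step: saved items carry a semantic role ("right wall", "gallery"), and each new room offers whatever surfaces it has for those roles. All names and the fallback behavior here are invented for illustration; a real system would resolve surfaces from scene understanding.

```python
def remap_layout(placements, room_surfaces):
    """Snap saved items onto the surfaces of the current room by semantic role.

    placements: dict of semantic role -> list of item names,
                e.g. {"right wall": ["library"]}
    room_surfaces: dict of semantic role -> surface id in this room.
    Items whose role has no matching surface here fall back to "floating".
    """
    layout = {}
    for role, items in placements.items():
        surface = room_surfaces.get(role, "floating")
        for item in items:
            layout[item] = surface
    return layout

# The same saved layout, applied to two differently shaped rooms.
saved = {"right wall": ["library"], "gallery": ["book-covers"]}
home = {"right wall": "wall-2", "gallery": "wall-5"}
office = {"right wall": "wall-9"}  # this room has no gallery surface

print(remap_layout(saved, home))    # prints {'library': 'wall-2', 'book-covers': 'wall-5'}
print(remap_layout(saved, office))  # prints {'library': 'wall-9', 'book-covers': 'floating'}
```

The design choice worth noting is that the user's layout is stored against roles, not coordinates, which is exactly what makes it portable between rooms.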
Speaker5: Yeah, absolutely. Yeah.
Brandel Zachernuk: It’s the first product in what is likely to be a pretty long journey.
Frode Hegland: Yeah, it’s going to be very interesting. Sorry, Bruce just sent a message. Well done. Reply. All right. I’m trying to get Bruce Wayne to get in touch with Bill Atkinson, because in all the published stories of HyperCard, apparently he was inspired by an acid trip. Nowhere can I find whether he was aware of Ted’s work or Doug’s work or anyone else’s. I’m sure he was, but I haven’t been able to document it. And in the middle of a thesis, to say "suddenly there was acid and we have HyperCard" is just not really good enough.
Speaker5: So let’s say if you customers. How are they? Good. Yeah.
Frode Hegland: It’s an exciting time. But Mark, our discussion earlier: Mark and I were polarizing each other, as we often do, on whether PDF is where information goes to die or to live. I think it’s very interesting with the different timeframes of what that means. I think PDF is very likely to still exist in 100 years; most other formats, not. But of course other formats are much more interactive in the short term, so we should try to work together to cross those two.
Brandel Zachernuk: Yeah. I mean, I think the challenge is that it’s easier to talk past one another about what it is that PDF is or does, and to lump too many of its characteristics versus other things together. If you have the ability or tendency to collect hypertext documents in the same way that people collect PDFs, then I would argue hypertext has exactly the same kind of characteristics of durability. And because it’s human-readable first, it actually has a higher likelihood of remaining legible. The downside of hypertext in HTML form is that it tends to be transmitted; people tend to be pulling documents down from web servers, and there are hyperlinks to various websites that go down, and you end up with link rot. But link rot is a characteristic of the ownership model, not the file format. On the flip side, PDF is a binary format; it’s much more arcane. So if the last surviving runtime that was able to parse PDF in such a way that it becomes legible on the screen goes away, then even if we have veritable gigabytes of it, we would need a pretty perverse-seeming kind of reverse-engineering process to pick apart what the heck the folks at Adobe were thinking, through all of the layers of accretion that have accumulated over the format as a consequence of it trying to chase every twist and turn in the fashions of what digital documents actually represent.
Brandel Zachernuk: So, you know, I think one of the things that you need to consider is the balance of risks in that regard: whether link rot, which is a huge problem, is more of a problem than the opaqueness of the format. And the other aspect of it, like I said before, or like I meant to say, I don’t know whether I got it out, is that PDF is first and foremost in service of being a print preview. And, like Mark was intimating, many, many other people who have been responsible for producing PDFs have been much less respectful about the mechanism through which they get from something that was at some point an intrinsically digital document to something that is supposed to exist for screen. So people do all kinds of crazy stuff. Arguably, rendering bitmaps is less harmful than when people have completely mangled the logical order of text fields that have nevertheless been arranged sequentially on screen. I’ve done experiments like that with HTML, where I’ve taken text and randomized the semantic order in the document, but then used various arrangement tricks to make it appear sequential, just to be obnoxious. But after I did that, I realized that is what people like Facebook and Twitter do for advertising, in order to make sure that people don’t have the ability to parse and recognize when something is an ad.
Brandel Zachernuk: And so every time I do something obtuse for my own entertainment, I realize that people are using it for abuse on the Internet. So like, yeah, there are people who do that, and I don’t know what motivations they have: whether it simply was the easiest thing for their file exporters to do, or whether there’s some level of contempt. I mean, if it’s Elsevier, then it’s hard to tell. But yeah, I think the long and short of it is that there are those duelling concerns: which is the more important, and what kind of controls do you have over it? If you can guarantee that, and moreover, if you could convert unfriendly PDFs into friendlier ones, then maybe a lot of those issues go away. But on the other hand, if you have the ability to actually store stuff, and maybe that’s what’s being talked about with Solid, the tendency or ability to store documents as you download and browse them, then maybe that’s actually a solution to it. I don’t know. Sorry, that was heaps, but hopefully I’ve addressed everything.
Frode Hegland: I found this book, one of those kind of drier old ones. And it has this bit by none other than my main advisor, Les Carr. It was his PhD thesis. And the coolness of it, because it kind of destroys a lot of concepts, is that he created a system whereby, when you author, it would be in LaTeX. It’s kind of old. You do the tagging and all that stuff, and when you export, it exported both to PDF and to HTML, with tags for hypertext reading. Very close to Visual-Meta. Obviously, I have to write it into my thesis, but the whole notion of how you choose to take this stuff out and back again is going to be very interesting. And that’s why, I haven’t told my wife yet, but I’m spending $5,000 to fix Author, particularly for the map view. Because if it isn’t good, then what’s the point of putting it in the format to reconstitute it, if it isn’t truly interactive? And that’s why getting our hands on this equipment will be very different from mock-ups. Yeah. By the way, just so you know.
Mark Anderson: Frode, that’s Hypertext II. I’ve got one. There were two conferences in the UK. Yeah, that’s the second one.
Speaker5: No, I got both.
Mark Anderson: The other one. No, no. Which I picked up moons ago, because it’s early stuff which of course never went digital, and the publisher’s gone and everything else. But if I could just circle back, it’s interesting, because I think there’s potential for misunderstanding my take on PDF. The more I look at it, and from the current work I’m doing, it’s much more of a social problem than a technical one. A big polluting issue, certainly in the academic or intellectual space, is that there’s an awful lot where the PDF is the quantum of measurement of your advancement or your commitments. And I’m seeing this because I’m working on something at the moment that is trying to build an infrastructure for supporting the data that goes into some of this funded research. We put all this money into it, and if all we get at the end is a PDF, well, that’s not much, because there’s a lot of interesting stuff not being reused. But what I’ve discovered in talking to this group of mineral chemists, a subject about which I know nothing, which is actually rather useful because I just don’t have the background in it, is that everyone knows what their output is, but they can’t actually describe it to me. And if pushed, the answer is always: well, it’s in the paper. But I don’t see how that’s possible, because when I look at some of the papers, the answers definitely aren’t there.
Mark Anderson: And that’s sort of the problem, which is why my feeling is that if the paper were an output genuinely produced off the underlying data, rather than a written manuscript loosely based on the data, then we’re off to the races. Because in a sense, I don’t have a particular beef about PDF per se; it’s going to be in some format. But what I’m seeing is this disconnect, and our systems are now clever enough that we ought to do this. The challenge, ironically, is that it’s far more of a social engineering problem than a technical one. We’ve got enough technology at the moment. I mean, it’s work, and it needs to be done by skilled people, but I don’t see a challenge in that. The big challenge is basically shifting people’s perspective away from saying "this is my paper, this is the value in what I do", towards saying "this is a report based on the valuable stuff that I do, which is actually the output from which the paper is made". It’s a subtle difference, but it unlocks so much more. Because, pondering the talks we’ve had over recent years, now that non-human actors are major readers of our information, I find it slightly hilarious that we even think about producing stuff designed for human print reading as our way of manifesting that information, because it actually works against us.
Frode Hegland: I think we need to define a little bit what we mean by the word print. It used to mean print to paper. Now it means print to PDF, so it is explicitly printed digital. And one of the most basic features of PDF, of course, is internal linking: table of contents and all of that. So yeah, it’s very basic, but it is moving towards using some of the more architectural affordances, to use that language. One thing, though, is very clear, having gone through doing the... actually, this is kind of super relevant. Let me see if I can show it to you.
Frode Hegland: And let me say how useful my library is now.
Speaker5: That’s putting it to the test. What’s that?
Brandel Zachernuk: I love having a good live-fire test of things like that.
Speaker5: There we go.
Frode Hegland: So I’m going to show, and I think some of you have seen this, I think, Peter, you’ve definitely seen this, but there is a big point. So this is our publishing, right? So.
Speaker5: Here is our book.
Frode Hegland: This is. Oh, no, that’s the wrong clip. This is. Yeah. Fota edition. So this is important: the difference between the right and the left side of the page. So what I’ve done is, every article.
Peter Wasilko: Green team.
Speaker5: What’s that? Uh oh. Right.
Frode Hegland: Oh, you’re all talking about waiting for Brandel because he has to see this. So on the right hand.
Frode Hegland: No, no, don’t worry. On the right-hand side is the first page of the actual article from Future Text, volume three. On the left is a summary. So the idea is that you can skip through, and you can always look at the summary if you want to, but if you want the human’s initial presentation, sometimes better, sometimes worse, it is always on the right. Well, something happened there.
Frode Hegland: So I’ve been experimenting. Some of them are, well, sorry, this is a Q&A, so it’s attempted to summarize each person for the Q&A.
Speaker5: Which is tough.
Frode Hegland: And that one is a bit sketchy. But for something like this, it’s actually really impressive. And ChatGPT is actually really good at answering questions too. But we obviously have the issue with sources. Trying to find out the first word processor that had link ability, where you could type a URL and it would become blue and you could click to launch it in a browser: according to ChatGPT, it was Word 6.0. According to the internet, no one knows, right? It may very well have inferred it by looking at different versions, but, you know, crazy. So the reason I’m showing you this is: AI will be our copilot, there’s no question, on many, many different levels. Like Les Carr says, it shouldn’t be called artificial intelligence; it should be called advanced IT, because it’s so many different things. Right. But finally, after showing you the summary on the left and the first page: as you know, here at the bottom right there’s a citation number. If you click on that, it launches the original. Oh, this is a library issue; we’re going through things. It should launch the original document, but that’s a glitch. So the whole point of what I want to do moving into VR is to use the extra dimension and extra space that we have. But it becomes easy to just have a lot of text all over the place. Softspace, for example, is hugely impressive, but until it has better ways to do window management, or node management, it’s going to be really difficult to think in there. So that’s why I think what you were saying, Brandel, becomes even more important in.
Speaker5: Rich environments. Yeah, it’s.
Brandel Zachernuk: Something that I’ve been sort of pushing on with designers, trying to make that clearer to them. Every opportunity I get, I try to evangelize changing people’s perception of animation from being static, in the sense of keyframing "this goes to here", to being constraint-driven simulated dynamics, as we call it, where you impart a force that will result in this moving from here to here, applying an acceleration force and a braking force. Because those things can be generatively combined in a way that creates a meaningful resulting space. If you have these constraints, attractors and repulsors and things like that, then they can be added, whereas it is simply not meaningful to say: add these two keyframe systems together, unless you sort of pretend to backfill those constraints. But, like you say, you need those things in order to be able to align things and fill a space in a way that has some kind of semantic meaning. Something that you were saying, Mark, about "it’s all in the paper" made me think about the closure over a document. And one of the things that you were saying about print not being print for paper, but maybe print for PDF: in that context, what I think print then means, not to get even more abstract about it,
Brandel Zachernuk: But of course, what print there means is: provide a hard closure over the inside versus the outside of a document. What represents the distillate that you want to say represents this thing? And one of the things that it sounds like you’re getting at, Mark, is this idea of: why do we have to have that hard boundary? Why do we have to have the document, the edges of what was put there, versus the entire corpus that actually happens to have contributed to its genesis in the first place? Storage is cheap enough that you could have video, or screen logs, of every single action that somebody took on the computer up to the stage of pressing print on that document. And that’s weird and invasive. But taking it as a model from the outset of what a document actually constitutes gives you a better sense of the way in which you could reach back into lots of stuff about the process whereby some argument or knowledge is said to have been created, and be able to scrutinize and inspect it. And maybe that’s not something that you give to everybody.
Brandel Zachernuk: But if nothing else, it will be an interesting thing for one to have of one’s own work. And so I think that the combination of the ubiquity of digital technology use for the creation of knowledge, as well as the cheap-to-free aspect of the data storage related to that, means that there is an interesting thing to interrogate there, in terms of what the boundaries of a knowledge system or a document are. In the context of a library of citations and things like that, I think they’re all pushing on a pretty similar thing: what happens when you store everything? What do you need to do with the information management, and what are the artifacts that can then be constructed out of it? I’m really excited by this: my wife read the book Editor of Genius, about Max Perkins, a while ago. He’s the guy who edited Lawrence of Arabia, which sucked for many, many years before somebody took to it and was like: let’s turn this into a book, shall we? And yeah, just the fact that we only ever see finished books gives us a wildly false idea of what it is that a book is.
Brandel Zachernuk: I was talking also to my mother-in-law. We got an iPad for sculpting recently, and I was talking about the difference between digital and traditional-media sculpture. But when you actually go see Rodin’s works, or a sort of second-string garden where people have stored all of the other practice stuff that he did, you realize that in hard media the good sculptures are normally the sixth or seventh go-round, and that looking at one sculpture doesn’t convey the practice of work that went into making something look like that. So I think having a capacity to draw on that much wider corpus and play with it is a really fascinating, but actually really confronting, possibility, because we’re so used to so much falling from view, in one sense just because of the experience that there’s only so much you can store alongside this damn book. But now that we don’t have any physical, geospatial constraints related to just storing everything you might possibly have done in order to produce this paper or this book... I don’t know. I find the possibilities a little bit dazzling and dizzying.
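The constraint-driven dynamics Brandel argues for earlier, attractors whose forces compose by simple addition, in a way keyframe timelines cannot, can be shown in a toy one-dimensional integrator. This is an illustrative sketch with made-up names and parameters, not code from any real animation engine.

```python
def simulate(position, attractors, steps=2000, dt=0.01, damping=4.0):
    """Integrate a point mass pulled by spring-like attractors.

    Each attractor is a (target, stiffness) pair. Forces from multiple
    attractors simply add, so two constraint systems compose into one
    meaningful system; there is no analogous way to 'add' two keyframe
    timelines. A damping term plays the role of the braking force.
    """
    velocity = 0.0
    for _ in range(steps):
        # Sum of spring forces from all attractors, plus braking.
        force = sum(k * (target - position) for target, k in attractors)
        force -= damping * velocity
        # Semi-implicit Euler step.
        velocity += force * dt
        position += velocity * dt
    return position

# One attractor: the mass settles at its target.
print(round(simulate(0.0, [(10.0, 2.0)]), 2))              # prints 10.0
# Two attractors added together: it settles at the stiffness-weighted blend.
print(round(simulate(0.0, [(10.0, 2.0), (0.0, 2.0)]), 2))  # prints 5.0
```

The second call is the whole argument in miniature: combining two constraint systems yields a sensible intermediate behavior for free, whereas averaging two keyframe animations yields nothing meaningful.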
Frode Hegland: Yes and God almighty. Lots of comments on that, number one.
Frode Hegland: Editing is much harder than creating. You know, that’s why I’ve done this feature, because it’s too much. And I’m consciously thinking: yes, it’s fine here, but when we make a bigger space, of course we have a bigger opportunity, but it’s also a bigger opportunity for mess. So that’s one thing. In terms of the early things: one of the DJs I listen to a lot when I work, her name is Hannah, and I finally looked up today who she is. It turned out she used to be a guitarist, has gone through all these creative revolutions, and now she’s become an electronic musician. Anyway, her frequent collaborator is Grimes, Elon Musk’s girlfriend, or ex-girlfriend. I thought that was a bizarre connection. And then, going back to your kind of physicality, that’s so important. Do you remember Mac OS 8? I think it was the one where, with the menu at the top, it was really easy to tear things off. Right. And that really made me think that some interactions should be hard. So I can imagine, for something like Author to live in this world: you have your main writing space and you can move it, but it’s a bit heavy. It’s intended that this is an important thing. And then you have your graph, and maybe the way you’re touching an institution or a person feels different. As you know, I’m a gamer; I play Battlefield, and I’m still baffled by the fact that you can have two different machine guns, each a little animation and a little thing in your hand, and they feel completely different. How is that possible? Right. So if we take that kind of thinking into our knowledge work: you put your hand into the map, and you can feel what stuff "is", right? Maybe over time we can have temperature: something is literally hot, something is cool. There’s such incredible potential, because text in and of itself is super important for its symbolic and grammatical potential, but it’s also incredibly lacking in its information at scale.
So that’s what to me is so exciting about rich environments because we can put that in there.
Speaker5: Theatre, please.
Frode Hegland: Good timing.
Peter Wasilko: Yes, Brandel, I was wondering, are there any gloves with haptic feedback that are compatible with the Mac currently?
Speaker5: Um. So.
Brandel Zachernuk: Haptic feedback as it currently exists is very much a niche part of any person’s VR ecosystem. I would argue that part of that is that, for all of their commercial success, all of the Quest devices to date are pretty squarely focused on gaming. I mean, the Quest Pro has some nods to it; they are trying to show it in an office context and things like that. But it’s actually not there; it just doesn’t work that well. I mean, correct me if I’m wrong, but it’s just not that thoughtfully integrated with people’s workflow.
Speaker5: It’s rubbish.
Brandel Zachernuk: Yeah, sorry. Vision Pro is meant to be an actual computer, you know, a device that you have apps on and other things like that. So it’s the sort of thing that maybe people would want to play with creating haptics for. But I really don’t think there has been a device in the hands of the public yet that has stimulated the recognition that actual haptics would be really great. Also, all of the controllers to date have had haptics in them as well; since the get-go they’ve had sort of buzz feedback. And most people who have been in the space, again largely for these coarse-grained games, and who are just excited to be there for the sake of it, have considered that to be adequate. And, you know, that’s disappointing for people who have better discrimination for what the platform might be for. There are things like, so there’s a company called Ultrahaptics, or Ultraleap now, which is Ultrahaptics plus Leap Motion, and they have an ultrasound-based actuator: a focusing pad of multiple ultrasound speakers, such that they claim the ability to create the wave interference pattern to focus ultrasound tactile feedback on certain surfaces. And that’s a commercial, consumer company. It’s likely that it works with Mac. I’ve never bought or played with one; they seem cool. I’ve used the other side of it, Leap Motion, and that’s what I did the Leap Motion text editor with many years ago, actually on Windows.
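The wave-interference focusing Brandel attributes to Ultrahaptics rests on a simple timing principle: delay each emitter so that all wavefronts arrive at the focal point simultaneously and interfere constructively there. Below is a minimal sketch of just that timing math, assuming idealized point emitters and still air; the function names are made up, and real mid-air haptic arrays involve far more than this.

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly room temperature

def emission_delays(emitters, focus):
    """Per-emitter launch delays (seconds) so all wavefronts reach the
    focal point at the same instant, producing constructive interference.

    emitters: list of (x, y) positions in metres.
    focus: (x, y) focal point in metres.
    """
    dists = [math.dist(e, focus) for e in emitters]
    farthest = max(dists)
    # The farthest emitter fires first (delay 0); nearer ones wait for it.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A three-element line of emitters, focusing 10 cm above the middle one.
emitters = [(-0.01, 0.0), (0.0, 0.0), (0.01, 0.0)]
focus = (0.0, 0.10)
delays = emission_delays(emitters, focus)
print(delays[0] == delays[2])  # prints True: symmetric emitters share a delay
print(delays[1] > 0)           # prints True: the nearest (centre) emitter waits
```

Steering the focal point around a hand is then just recomputing these delays per frame; the interference peak is what the skin feels.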
Brandel Zachernuk: That might be a dream, though.
Peter Wasilko: Touch typing.
Brandel Zachernuk: So one of the things that people are realizing with touch typing is that you really need the tactile feedback, even if it’s subtle, and nobody has a really good solution for that. There are a lot of pretty zany, half-baked ideas that come through the halls of things like the User Interface Software and Technology Symposium and, to a lesser extent, CHI. But no, nobody’s got anything that they’re super ready to put out. Obviously, Vision Pro is not out, and so nobody except Apple has really had the ability to make anything that works with it. People who have their hopes and dreams set on selling some kind of amazing haptic-feedback device may have the ability to glom onto that as quickly as possible and get it out the door, but I’m not aware of any of that at this point.
Frode Hegland: So, Brandel, obviously Apple is spying on us and listening to us. The next version of the Apple Pencil will obviously have to have haptics and will obviously have to work in visionOS, because pointing and tapping is fine, but there are reasons tools were invented; our hands need something to hold, just like you said with the keyboard. Typing on a keyboard in VR is much easier than doing it in mid-air. So at some point we are going to have Apple Pencil-type tools, and we’re going to be able to interact in different ways, and that’s going to give us an amazing level of immersion, I think. Anyway, sorry, that’s just Apple corporate secrets I leaked there, just by saying it’s inevitable. They have to do it. Someone has to do it. How? I don’t know. How are you going to locate that thing in 3D space? Maybe they have, you know, purple tips, so they always know where it is.
Speaker5: I don’t know.
Mark Anderson: It is interesting, funnily enough, just the act of having a tool like a pencil, you know, a thing. Certainly in some Western countries we’re so attuned to that as a kind of input device and general prodding stick that it somehow almost feels easier than using a mark-one finger instead, partly because we don’t point at people; we’re asked not to, you know, so we use our hands slightly differently. That sits at odds with the idea of having all this extra pinching motion. But the fact of being able to pick up, this is an iPad-type thing, and just manipulate it using these is also interesting.
Brandel Zachernuk: Yeah, absolutely. I mean, I’ve been thinking about it in the context of... so the reason I was away was not out of spite; we were in Japan for three weeks. It was a lot of fun. My daughter doesn’t know how to use chopsticks yet. Just Tokyo; we just hung out in Tokyo and walked around, like 15 km a day in 37 degrees. It was sweaty, but we made a lot of use of the drink vending machines there. But yeah, it was really interesting thinking about what happens with just that tool: the leverage multipliers that a pointing device has, the fact that you can make these relatively small finger movements and have a pretty large change in the end effector at the other end that you’re holding. So yeah, it has definite benefits. In terms of the other things that people have done for haptics, Peter, there’s a really interesting one in Germany that people have been using, and actually there’s a guy who has a YouTube channel devoted to making them, where you use strings that are on reels with braking force attached, so that you can grip something and the braking force, anchored on the back of your hand, will stop you. And apparently it actually works with SteamVR, so you can play Half-Life: Alyx and have your hand impeded as you try to grip things of various sizes.
Brandel Zachernuk: And apparently there is some adequate level of encoding within the game, so that if you touch a surface, your hand won’t have a retarding force, but your fingers will be able to touch it as it provides resistance, and you can pinch-hold a small thing versus a big thing, things like that. So that’s really interesting. Those ones are on the back of the hand. The German company actually has these mounted in various positions, and you can arbitrarily combine them. This is because Germany is really into making cars, and into making cars better. So they have the ability to compose an arbitrary combination of these in order to provide surface restriction for somebody navigating a virtual scene, which is initially, ostensibly, for assembling vehicles and being able to do virtual vehicle inspections: you have your free movement here, then a hard surface here, and a pliable surface there, and things like that. So those are solutions, but they’re not, like I said, consumer-ready, consumer-friendly things.
Peter Wasilko: Do you have the company name or a website for either?
Brandel Zachernuk: I don’t, but I’ll look for the YouTube channels for both of those things. I think one is very likely to be from the Hasso Plattner Institute. Yeah.
Frode Hegland: I mean, where I am with all of this is: you know, Brandel, one of the first things you did in our community was to show us and talk about that dataspace, and how the guy who made it thought that a lot of data was missing, but it’s just so much more intelligible in head-tracked 3D than in flat 3D, so to speak. I’m now really worried that text in this environment will be seen by the user as more messy than useful. So that’s why I’m really concerned about the ways we can make it tangible in the right way: interactive, visual, all of that stuff. And that is the key reason I need to get my hands on this thing as soon as possible. You talked about, sorry, just really quickly, the Quest Pro: there’s one right next to me, and I hardly ever use it because it’s a faff to get going. It’s exactly the thing. Yes. Sorry, Mark.
Speaker5: Uh oh, yes.
Mark Anderson: I was thinking of your point about text. It’s been illuminating, as it happens: there’s an exchange that’s interesting and horrifying at the same time in the Tinderbox forum. I think he’s Indian; it doesn’t really matter anyway, but he’s an author. And he was saying, well, basically: I don’t want to look anything up, I just want to talk, I just want to ask questions. But he’s got something that I haven’t tried out yet, which is effectively, well, it’s not really a large language model, it’s like a local language model: you take a whole lot of, say, documents about an app you use. And I was trying to explain to him the insanity of this, because, being the person that’s written most of the documentation this guy is using, I’m saying: well, it’s already all out of date, and you could ask the question of a live human being who would not only give you a better answer, but would actually answer the question you meant to ask, rather than the one you thought you were asking. But it’s really interesting to me at the same time. And it’s not because I have a thing against AI; I’m actually quite interested by what he’s doing with it. But it is interesting to reflect on just how quickly people have seized on this thing, that they want to be told by a computer what the answer is. That’s interesting.
Frode Hegland: Well, I mean, it goes all the way back to Socrates. He didn’t trust writing, of course, because it couldn’t answer back. Now we have opportunities for the text to answer back. You know, Mark, to be honest, I’d much rather call you and ask a question than to go to the database.
Mark Anderson: Yeah. I mean, it’s just.
Speaker5: You’re not.
Frode Hegland: Always available.
Mark Anderson: No. And, you know, I hate to be absolutist about this, because I find some really interesting aspects. I really do quite like this idea, which I hadn't really thought about at all, of a tool that you use a lot and which is used enough that there is writing about it. That's actually quite interesting: essentially mashing up your own model to enrich it. Because it's fair enough that, imagine something like Photoshop, which has a multitude of shortcuts and a very deep menu system, just being able to ask, what's the button for X? And I suspect these sorts of models are very good at that one thing, because there is a finite answer to it, unless you phrase the question completely wrongly. I'm not so sure it's so good at the other things. But again, it all boils down to what you want. And of course there is this persistent notion that our own time is incredibly valuable and anybody else's time clearly is not, because their job is to not waste our time.
Speaker5: Yeah, but.
Frode Hegland: Here’s an interesting thing. So in the paper that these geniuses wrote, they divided hypertext into seven systems. And two of them that are really thoughtful for me. There is essentially hypotext for thinking as a tool for thoughts. And then there is a hypertext for publishing or hypertext for knowledge representation. So the second one supposes that there are answers. And when Dave told me for the rest of you, this is Mark and Dave’s work. So when Dave told me this a few weeks ago, it was for me a huge load off my mind because systems for thinking and systems for publishing are very, very different. You know, this is why I approached the map in author as defined concepts and not export its glossary to different things. Same underlying stuff. Right. So what you’re talking about now really goes into that murky territory in some areas, like learning Photoshop, there is a defined truth. It’s this button or this procedure. So then when you get into the whole tools for thought, there is no pre known answer. You know, there is a process of learning and that becomes really very interesting down with the interaction affordances. And that is why I’m. But currently really scared of the potential that VR gives us and that Apple is very good at limiting our abilities because they save us from complexity nightmare, which is great. But at the same time, if we in the community cannot demonstrate that there is a way to tame the complexity, you know, a lot of this I feel will go underused and I hope somebody contributing to the community will have some brilliant insights on this. There should be no reason I can’t have 10,000 pieces of text in front of me. And interact with it like in Disney’s Fantasia or Fantasia, rather, to find what’s relevant and how it connects. But it’s so easy for us to talk about it. But once you start playing with it, something else.
Speaker5: Oh five mark.
Mark Anderson: Okay, I put my hand up because I've got some things that link back to what you were showing at the beginning; I just didn't want to derail this riff if we had further to go with it. A couple of things chime with what you were raising: A, the stretch-texty stuff you were doing, and B, the map. So if I can share for a second. Right, first to show you the stretch text stuff. This is me tidying up a blog that's live now, in the framework I wrote back in 2010 for Ted Goranson, and he's doing stretch text. So he folds his headings, but he's now got these markers, you'll notice the colour in there, because these are themes based on the link type within the source Tinderbox document. But they're stretch text. There's some tidying up to do here, and this really should be SVG and not artwork and stuff, but it's quite cool, and we can have nested stretch text here. And I think when we were speaking earlier, Frode, right at the beginning of the talk, the thing that stuck in my mind when you were talking about wanting to hide stuff away is partly that, but by any other name you were talking about ad hoc stretch text.
Mark Anderson: And this is one being done with an image, that's an image in there. And I mentioned to you earlier that he has a notion of teaser text, and that's used a number of different ways in this thing. Anyway, I just show you that for what it's worth; I'm still working on this for him, we're just rebuilding it. The main thing is it's ten years on, and neither of us can remember how we made it in the first place. The other thing to show you: you were talking about the map. So, for those who are unfamiliar, this is a tool called Tinderbox, where I spend a lot of my time; it's a kind of tool for thought. And this is just one of its maps. The grid is nothing particular; it was based on an article on literary likes, well, there are no likes, this is all people who disliked one another, and I've coloured them up with other links to show: green people are only hated, red only hate other people, and blue both hate and are hated. Now, the red lines are just a link type and, sorry, there's a control here.
Mark Anderson: For any given link type, I can actually turn it on and off. But separate to that is the hyperbolic view, which Tinderbox has; it's just been reworked, and I was wondering whether it offered any ideas for what you were talking about in terms of layout. So this is working on a hyperbolic surface, which is not quite the same as a sphere, but for lay folk like me that's a reasonable description. Don't worry too much about the visuals; the shape of things is not designed for that sort of aesthetic. Unfortunately, in this particular case I just tried to pick something simple, but what I can do in here is select or deselect a particular link type. What I don't have to hand is a good example where I have lots of link types already set up with different things. But if you were doing argumentation, or if in the current map you had different sorts of groupings, you could effectively toggle these in and out. And you can either show direct lines of association or you can actually put in cross links; I don't think there are any in this particular document. The cross links are left out basically because what we discovered is that once you get into a richly linked document, like my aTbRef document, which has thousands of links, there are just so many.
Mark Anderson: It’s just really hard to plot in a tool that’s not a, it’s not a network plotting device, so it’s not a GIF or a excel or something. Um, and the other interesting thing that came out of it is that so essentially what you’re seeing here is you, you can select a pivot note to it and it’s drawing um, links in this particular tool are directional, but it doesn’t mind about the direction. It’s basically can it get to anything else in the document via a link in this case via a link of a particular type. And if I had other ones, I could. The big thing I’ve got engineered this time is I’ve got the guy who made this is to allow me to take a particular link. So in other words, I can follow an affiliation or a strand of annotation through a document which I think is really interesting. I haven’t I, I’ve wanted this for about ten years and now it’s arrived. I haven’t got a really good document to go and play with it. My next job is to build one, to explore something. And another interesting thing that came out of it is because in my usual way, I through one of my big documents at it and everything ground to a halt is we discover that actually trying to plot everything is isn’t helpful.
Mark Anderson: So this runs out, I think, to 7 or 8 links from the starting object, and then it basically doesn't plot any further. And you could say, well, why not? But if, for instance, you thought there was more here, I can make this the pivot point and go on exploring. When I first saw it, I thought, I'm not being shown the whole picture. And then I actually thought, no, for human-level understanding, and I'm talking to your picture, not the camera, your picture's all over here, for human-level understanding this is actually quite nice. It just shows it's Mark Bernstein, who thinks about this stuff a lot and has been doing so for 30-plus years, so it's hardly surprising. But anyway, I'll just show that to you; it's not that I think you can make this, I just think some of the concepts there are interesting. And again, to me, on one level this is only a different version of that map, which is entirely manual. I mean, I've just arranged things in a grid in this particular case, but these are all movable, all interactive objects.
Mark Anderson: And effectively each of these is a note within this device. And in a rather Engelbart sense, it actually has all these views available at any time on the same set of data. Which is interesting, because I hadn't realized quite how unusual that is. I mean, I know Word has an outline and a sort of print view, but I see that slightly differently, and that's not to be snippy about it; the views in most tools are not significantly different. One of the interesting things in teaching people to use this is getting them to understand that the other view they're looking at is actually the same data, and they look at you somewhat doubtingly, as if there's obviously some nasty surprise. But it is just a different showing of the same thing. And again, this is what you're doing, Frode, with Author: you've got effectively your text, and you've got elements broadly drawn from, or linked to, the text, which you're showing in a different way. Anyway, I just show those things for what they're worth, in case they're of use, and I'll stop sharing.
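Since Tinderbox's implementation isn't public, here is a rough illustrative sketch of the behaviour Mark describes in the hyperbolic view: start from a pivot note, follow links of selected types regardless of direction, and stop plotting after a fixed number of hops. The data, names, and 8-hop cutoff below are all assumptions for illustration, not Tinderbox internals.

```python
from collections import deque

def reachable_notes(links, pivot, allowed_types, max_hops=8):
    """Collect notes within max_hops of the pivot, ignoring link direction.

    links: list of (source, target, link_type) tuples; the names and the
    8-hop cutoff are illustrative assumptions, not Tinderbox internals.
    Returns a {note: distance} map.
    """
    # Build an undirected adjacency map restricted to the chosen link types.
    adjacent = {}
    for src, dst, kind in links:
        if kind in allowed_types:
            adjacent.setdefault(src, set()).add(dst)
            adjacent.setdefault(dst, set()).add(src)

    # Breadth-first search out to max_hops, recording each note's distance.
    depth = {pivot: 0}
    queue = deque([pivot])
    while queue:
        note = queue.popleft()
        if depth[note] == max_hops:
            continue  # stop plotting beyond the cutoff
        for neighbour in adjacent.get(note, ()):
            if neighbour not in depth:
                depth[neighbour] = depth[note] + 1
                queue.append(neighbour)
    return depth

links = [("A", "B", "hate"), ("B", "C", "hate"), ("C", "D", "annotation")]
print(reachable_notes(links, "A", {"hate"}))
```

Re-pivoting, as Mark does on screen, is just calling the same traversal again with a new `pivot`, which is why the partial view can keep growing under user control.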
Frode Hegland: There is a very interesting difference between that approach and Author, which you brought to my attention there. In Author, if I tap on someone, let's say, you could say it's not a one-way relationship; you can have links to many, depending on.
Speaker5: Well on.
Frode Hegland: What generated it, right?
Speaker5: Oh, sure.
Mark Anderson: I mean, the fact.
Frode Hegland: No, but what’s this is what’s interesting about what you’re showing me, because this is very good For one thing, it’s very clear is a fan of the musical Hamilton. Right. But none of these have direction. Right. And what you showed us direction. So it becomes very interesting. At what point would what user need to assign directions or definitions? What is the useful attributes that a user would like to assign to this concept or node or whatever?
Speaker5: Because I think the.
Mark Anderson: designer of this, when he made it, it was probably a pragmatic decision at the time, which at times he says he sort of regrets, because the world decided that the whole idea is that it should be bidirectional. But actually, in the built-in automation of that tool, you can traverse the links in either direction; they just have a direction, and that's partly to do with the way you create them. And it all came to a head when the cult of Roam turned up about two years ago and claimed to have reinvented hypertext. Not quite, it turned out. But I sort of understand why, because, you know.
Mark Anderson: No, no. But in fairness, someone had found something that had been rather left in the corner, and dusted it off again. But your point about directionality is interesting, actually, because this is one of the problems with a lot of things at the moment: take a bunch of information, make a graph out of it, and then say, there, it's the Obsidian graph, this is the answer. Well, no: these are the links that you created, and the links you didn't create aren't there.
Frode Hegland: It’s so important, Mark, when you look at so many scientific kind of companies websites, they have links and graphs in the background as an image. There’s never any information there. You know, and to see that, of course, it’s useful to see clustering at some point. Of course there.
Frode Hegland: But to build truly useful interactions in this, I believe that what has come out of this conversation today, for me at least, is the acknowledgement that academia is based on rectangles of paper. For the last couple of hundred years, our progress has been based on this. Hypertext makes that box smaller, but it's basically still screen-based, roughly speaking, right? And it's connected by links, typed or not. So what we're saying with these spatial computing spaces, to be politically correct, is that we're going into a new substrate, and that substrate is arbitrarily sized rectangles, because it's still useful to think in terms of rectangles, not always, but also a huge volume. And that, I think, is what makes it a different medium.
Speaker5: Right? Yeah.
Mark Anderson: And it’s interesting watching people work with maps. Again, my my reference frame space is tinderbox, but I think it translates more widely is that for some people it’s really literal. So if the screen if the text in the box isn’t, you know, the only text that’s there is the text you can read. And if it isn’t on the screen, it isn’t there. Whereas to me, when I use it, I’m using it far more like as a mind palace. So this this box or this shape here is a proxy for all the information that lies behind it And what it, what it allows, what it allows one to do is to do some very, very fluid sort of associative work because you can just drag things that are that in a sense are quite large, close or far from one another. And you can use you can use sort of explicit linking or you can use proximity type linking or you could use some visualization, some visual manifestation. So the color, the shape, whatever to do that.
Frode Hegland: So yeah, you're very right, I think, Mark. Spatial hypertext needs to be reinvented for volume.
Mark Anderson: Yeah, yeah. And it's a shame no one's putting money into it. I mean, Claus, bless him, is about the last man standing; he's about the only lab or group that's doing stuff, and he's still struggling. Part of the problem, and it's really interesting, is that he keeps looking at things like recommender systems, you know, which movie should we go and watch, and I think that's really missing it. I have doubts about recommender systems unless you have very mainstream tastes. It's partly about being given a big enough problem. I think one of the problems that some of his work suffers from, unintentionally, is that the funding he's got has been too small. Classic thing: they did a project with an engineering company, and the problem was that it was small enough that the engineers basically knew the answers, or thought they knew the answers. So they didn't trust anything, because all they saw, in a slightly unintentional Luddite sense, was something that told them stuff they already knew. And they didn't want to know the stuff that they didn't already know. Whereas it begins to have some point when you get to something larger, where you can't know everything yourself.
Frode Hegland: In the history of the visualization of information, one of the really important pieces is, of course, the cholera outbreak in London: the map with all the dots where people were sick, and, what's his face, John Snow went and broke the pump. It's really important to acknowledge that that graph was made after the discovery. It did not aid the discovery; it's a great presentation device for politics, which was very useful, but the act of making it is not what produced the discovery. So we already have knowledge representation versus tools for thought in that medium, right? So when you're talking about these engineers thinking they know, if what is in there is too close to what they know, I completely agree with you, that may be a big problem. And I think that.
Speaker5: You know, I was.
Mark Anderson: Yeah. I'm not saying, unless I implied otherwise, it just so happens it was an engineering firm. I wasn't saying they were like that because they were engineers.
Speaker5: No, no, no. I wasn’t drawing.
Frode Hegland: any inference from that. I'm just saying, if you look at some of these earlier hypertext systems on the Mac or whatever, with nice little screens, really fun, but to actually get stuff out of there, in many cases you'd do better to just open a book. It was really important work, but the medium wasn't there yet; there wasn't enough there. And I really think we have to more rigorously separate the thinking process from the finding-out process. They're not the same. They often overlap, of course they do. But if you are a student told to learn about Socrates, there are things you can put together, and it connects, and it's all very nice. But if you are a philosophy student trying to build on Socrates' philosophy, you're trying to make something new. That's a very different mechanism you need, one that isn't constrained by what has gone before, right? And this, I think, will be really.
Speaker5: Really important.
Frode Hegland: Constraints in the spatial stuff we hope to do.
Brandel Zachernuk: On the subject of the dearth of research with regard to spatial hypertext, of people being able to conceive of the necessity and the value of doing those things: I would also point to the immaturity of both the devices and the context in which most of the devices heretofore have been presented. The fact that people think they're for space aliens or video games means that they also haven't provided application layers that are appropriate for the people who are more interested in the text than in the specific subpixel anti-aliasing that's necessary in order to make that happen. And it's fairly rare that somebody has the capacity and the patience for both, to the point where they can make a dent in either. So hopefully that'll change with Vision Pro. Something I'm starting to think now might be worthwhile is Apple and other people actually showing up to conferences, maybe Hypertext '24, or approaching various universities and educational institutions with: if we give you one of these, what will you do with it? That kind of thing. So if there are good candidates for that, like I said, it might end up being kind of my job-ish; I can't make any promises at this point. But it's really interesting to ask: if we had the ability to mobilize some resources in terms of training, discussion, actual units, headsets and things like that, what would make the biggest dent in the public, intellectual, and academic capacity to recognize this as a fertile ground for exploration? That's an exciting thought. So, yeah, thank you.
Frode Hegland: I'm just looking in my old site here, and I have a lot of link rot on my own site, which is awful. So, the thing I talked to you about earlier, about what's in this frame and what can be outside.
Speaker5: Um, on.
Frode Hegland: the Mac, we have had Publish and Subscribe, we've had all these mechanisms, we have AppleScript to talk between things. But if we're going to really realize the potential of this stuff, it has to be opened up more, obviously in a safe way. For instance, a thing that I designed yonks ago, in the 90s, was a desktop where your favourite friends would have icons on the desktop. They could be cartoonish, looking like you, or pixelated, or whatever, right? But they're there, and they have spaces. So let's say you get an email from Vince: the character holds up a letter. If someone has written a new article, maybe they hold up a book. So you can glance at your friends and have a clue of what's going on. Maybe one of them is traveling. This is all based on information they already published, so I wouldn't say there are any privacy implications here. So that gives you a more immediate environment. Of course, spatial computing environments have the ability to do that.
Speaker5: Right. But if it.
Frode Hegland: is inside an app, even in this space, you're not going to see it all the time. This is where Apple's been doing widgets on the macOS desktop forever, and I think they're the stupidest things. But sometimes I try to scroll to the right-hand side of the screen and suddenly something comes in; the idea is great, of course it is. But in the world we're moving into now, to have, for instance, you guys as the future text team, for me, always here, and to have some visualization of what you choose to make public, would be fantastic. But if that exists only in an app, it's lost. Because that's people, but there are also the documents and the concepts and everything else. And also I would like to be able to literally say, okay, all the people should go sit on the mountain over there. Why not?
Speaker5: Right. You know.
Frode Hegland: So it’s based on the people like you say, anybody who has written about hypertext, why don’t you go up in the sky? Because that’ll be really cool. Nonsense. Stuff that people should be able to experiment with. If a the metadata is there, of course. And two, it’s open enough to share because the ideal unit of sharing and interaction shouldn’t be manuscript size. You know, remember open doc, you have a space, you bring the tools in. That’s the kind of revolution I hope I can punch some people in the face with Apple by doing stuff that actually inspires it. And then finally, on this bit of a speech, Edgar saw the first two episodes of Doctor Who last week. Know, I’m trying to explain to Emily why Doctor Who is amazing. And the point that’s relevant to this conversation is each Doctor Who episode is so-so. Some are brilliant. Of course they are. But it isn’t what’s in the episode or what’s explicitly there. It’s kind of what’s in between, like the story of Rose Tyler.
Speaker5: Or arc with.
Frode Hegland: all the special things. It's amazing, it's so emotional, so rich, but it can't just be experienced once. I think that's something we need to bring into this knowledge stuff: the understanding that a lot of the knowledge isn't in the texts, and it's not even necessarily in the connections. It is literally in the interaction over time and what that makes happen. So that's why these interactions are so important and need to be so open. Soapbox over. Peter.
Peter Wasilko: Yeah, I’m very interested in seeing Bibliometric visualizations in 3D and I’m sort of thinking of the author map view. So I could say grab a citation and don’t just want to see that citation. I’d also like to see all of the other links that it cites Light up in one color and all of the sources that cited it directly or indirectly light up in another color so that I could see sort of the fan out over time going in both directions forward from the time of that particular citation and also reaching backward. You know, I think Vision Pro would be a great place to be able to do those kinds of visualizations. So I hope. Somebody at Apple maybe plays around and does a demo along those lines and also maybe dig up the old Project X visualizations and see about moving that into 3D.
Mark Anderson: Because, Peter, you made me think of the thing that Adam knocked up in all of about 20 minutes for me, a couple of years back, when we did that, okay, there were paper sites, but that was based on the cytology idea, and the links went both ways. In his case we didn't draw the links because it created visual noise, but that's a kind of visualization-layer consideration. It sprang to mind because I was sitting in the chemistry department, in the current job, the other day, and somebody asked me what I'd been doing before, and I showed him this. He said, what, so I could just do that with all my stuff at the moment? And I said yes. So he went off and scraped, in his case, all his back bibliography from arXiv or whatever it is, and he's now using it. And his comment was: why have I not seen this before? But that's the first time anybody has really commented on it outside this group since it first came out. Sorry, Frode.
Frode Hegland: This is so important. In this hypertext history, I'm reading about one system, and in the text, not in a citation, in the text, using normal words, it says this system was inspired by this other thing.
Speaker5: Right. That’s a.
Frode Hegland: human citation. So we should develop AI models to extract that, to do exactly what you're talking about, Peter, because otherwise that takes a lot of unnecessary brain work to put together. But if a sentence actually says it.
Speaker5: You know, it’s it’s.
Frode Hegland: Clear it’s there. So, yes, being able to visualize connections.
Brandel Zachernuk: It makes me want to connect a couple of ideas together. One of them is that a lot of the time people talk about "useful" as though it's a context-independent concept, as if something has abstract utility absent a particular application. And that's rarely the case. Unrelated, but I was listening to John Searle talk about why we consider there to be this convenient unity of consciousness, and it's because we rarely split apart or recombine as human entities. And so, as a principle, it seems like that's what people are, but that's only because we don't. There are a number of imaginable life forms that wouldn't have that as a bedrock thing. So even though it's an abstract principle, it's actually deeply contingent on the practicalities of living as we do. And in that context, usefulness is likewise contingent. There are some relatively consistent bedrocks for how we understand utility right now, but it's challenging, because a lot of people have some basket of uses, and if something doesn't match that, they say it's not capital-U Useful, rather than saying it doesn't constitute a part of the work they currently understand they need to do. The second thing I was interested in bringing up is that, as people have talked about, various displays quickly fall over in terms of the complexity you throw at them, by having all of the citations or all of those things.
Brandel Zachernuk: As well as your hyperbolic view only containing a subset of it in Tinderbox there, sorry, Mark. It made me think about the value of incompleteness. One of the mixed blessings of humans taking the time it takes to do work is that you can check in on them and ask, how are you going with that? And they'll say, oh yeah, I just realized there are like 20,000 more links to draw, and you can say, oh well, stop then, I didn't know it was going to be that big. Whereas a computer will dutifully go away, many times, though not always, be unable to compute it instantly, and then render you the illegible mess. Only if somebody has put human-level checkers in, asking, are you sure, it's going to be 230,000 results, are you given anything other than that telltale hang of an operation that ordinarily seems to finish immediately but that isn't doing well. And you can get that with the Microsoft calculator if you just take the 237,000th root of 25 billion, 260,000: you get that there, because they haven't put those stopgaps in to say, okay, are we going to do this anyway? So.
Speaker5: I think those two.
Brandel Zachernuk: things are related, in that we have much more small, specific uses for things, and it's actually really useful for us to be able to compose those piecemeal and see what fractional steps toward them can be, particularly in three-dimensional space, where it can become that much more overwhelming, or occupy more of your space, and things like that. It's probably worthwhile for people to be able to take fractional steps toward those immediately useful things. Also, from the perspective of utility: when you say, Peter, that you want all of the citations, you maybe don't want all of them; you maybe want enough of them, and possibly a vignetting-off of what that sliding window entails, in order to be able to decide: no, I really do want to see all of them, or, actually, I was just looking at the little piece. So, yeah, I think that requires a more conversational mechanism. Not necessarily conversation, but a sort of ongoing feedback loop of negotiation about what commands entail, in order to be able to progress toward making some kind of composite view for understanding the context of your actions.
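The "human-level checker" idea from this exchange, an operation that reports its expected size and negotiates before rendering everything, could be sketched like this. The threshold, callback, and fallback behaviour are all illustrative assumptions, not any shipping API:

```python
def render_links(links, confirm, threshold=10_000):
    """Render only after negotiating about oversized result sets.

    links: the full list of edges a naive renderer would draw.
    confirm: callback taking the count, returning True to draw anyway.
    The 10,000-edge threshold is an arbitrary illustrative default.
    """
    count = len(links)
    if count > threshold and not confirm(count):
        # Fall back to a partial, legible view instead of an illegible mess.
        return links[:threshold]
    return links

# A user who declines ("oh well, stop then") gets the capped partial view.
edges = [(i, i + 1) for i in range(25_000)]
shown = render_links(edges, confirm=lambda n: False)
print(len(shown))
```

The interesting part is not the cap itself but the feedback loop: the count reaches the user before the work is done, so "I didn't know it was going to be that big" becomes a decision rather than a hang.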
Frode Hegland: Really great point. Let's see, Mark has his hand up. I just briefly wanted to say that's like the Golgi stain effect: only about 10% of the brain cells show up in the stain, which is what made it useful. If it were more, you wouldn't be able to see what was there. So that's really quite cool. Mark, please.
Mark Anderson: Just as Brandel was speaking, I suddenly had the analogy that in a way it's like the fog of war in a game. If you could see the whole board, it'd be just too much, and anyway, it's not all pertinent at the same time. It's perhaps interesting to see lots of things going off the edge of the map, as it were, because you might think, okay, I need to go over there, and as I go over there, I'm shown more. So I think you're absolutely right. And it's partly contingent, in the same way that I think we're taking a suboptimal path in our relationship with our emergent AI: we're asking it to give us lots of answers, saying, hey, how can you help me do this, what have I misunderstood? I mean, I really love this idea, and I need time to play with things more, but I really like the idea of using it more for precision repetition. If for no other reason than it's one of the best ways you're probably going to get to see what a non-human actor makes of what you've written. Which is something interesting I don't hear people talk about very much.
Frode Hegland: Oh, I talked about that with a Russian scientist 20 years ago in Croatia. That was one of our big things, when we realized that you should use summaries to help yourself see how your writing communicates. Yeah, no, that’s really important. Sorry, Mark, did I cut you off?
Mark Anderson: No, no, no, no. And I’m. I’m absolutely glad to hear what you just said, because I think we need to be doing more of that.
Brandel Zachernuk: So it also reminds me of a. Sorry. Go ahead.
Frode Hegland: No, no, no. Go on. I’m happy to wait.
Brandel Zachernuk: Something I liked on Twitter recently was a particularly cryptic statement, but I think it’s a really important one in this domain. Ken Archer said it’s misleading to call neural nets subsymbolic; they are still symbol transformers, just not with any intersubjective meaning. And what that means is that, you know, people talked about symbolic AI back in the 1960s and before, where you would have the idea of explicitly encoded values for how big something is, or how heavy something is, or how fast something is. Then neural nets as a concept evolved, and people called them subsymbolic in the sense that you’re just sort of recapitulating a structure that’s based on some loose reading of what neurology tells us brains are. And what he’s saying is that those are still symbolic representations. And I’ve been thinking about what we mean by digital, because digital means digits, which means fingers. And, you know, calculus means little stones. It’s interesting to actually reach back and think about the practical consequences of all of those things. Anyway, the intersubjectivity of the sort of fractional results of a ChatGPT and other things like that, being so obtuse and so opaque, is really challenging. And I think in a way that means we need to think about what that means for what it’s doing for us, because most cognition, as we understand it with people, has sort of partial, intersubjective meaning, where you can talk about the progress somebody is making on a thought.
Brandel Zachernuk: And it might be a fabrication, but it’s a fabrication that’s maybe worth doing the work on, in order to make sure that these have turns, so to speak, that yield enough information to be able to decide whether they’re relevant. When ChatGPT is thinking, it’s just like: when do I want to stop you? How hard is it going to be, and how can we get contextual information about what kind of solution space, what kind of answer space, you’re generating? It’s something that I’m really challenged by and really, really frustrated by in the context of, you know, this current so-called subsymbolic AI. And I think that, particularly where we need to have this conversation, this multi-turn interface where we’re negotiating the idiosyncrasies of a space or a surface, or whatever we’re going to call these knowledge maps and these navigations around latent facts, we can’t have any agent run roughshod over that stuff without being able to inspect, interrogate, and negotiate what is fair game to manipulate. Yeah, it’s just a real big problem. Sorry, I don’t know if that’s related, but it feels like it is.
Frode Hegland: Hello, my name is Choir. Thank you for preaching to me. I think what you’re saying is crucial, and this is why I think VR has the potential to help us utilize AI rather than be used by AI, because we need the bandwidth. We need to build shapes of knowledge, not necessarily a sculpture, but shapes, and keep interacting with them, rather than just voice or ChatGPT. So yeah, that’s ridiculously important. Two other points. First of all, Brandel, please work some magic at Apple: the iPhone, in iOS whatever, when you put it sideways on a stand with power, goes into a display mode. Please let’s have that for iPads. We all have an iPad lying around doing nothing. Let it be plugged in, go sideways or whatever, show the time or notifications or whatever. Just make it happen. Anyway, I think one of the things that we’ve squeezed out as a community is this notion, and Mark has said this a billion times, and thank you for doing so, Mark. There are tools for thought as buckets to just put stuff into, and that’s roughly tools for thought for a lot of people. But then there’s this thing that I keep being so excited about that Mark and Dave came up with, which is the difference between tools for thought and tools for representation. When you try to combine them, then you get something intentional. You know, you have to put it in there with the explicit process that this is expected to be understood by you in the future, or by someone else.
Frode Hegland: Right. So that whole intentional side of things I think is really, really important. The fact that Mark, you and Dave split the two makes it easier for us to think about how to recombine them. So I’m just so grateful for that.
Peter Wasilko: Yeah, I’d just love to have Sidecar be able to work in portrait mode instead of having me put it the other way around. So please make that happen, Brandel, while we’re on our wish list.
Frode Hegland: Just do everything apple. Just, you know, take over Apple. Yes, I’m very grateful.
Peter Wasilko: Become the new CEO. Brandel. Please.
Frode Hegland: No, no, no. Then he’d waste his time on too many political things. I’m very grateful for today’s thoughts. Yes, Mark.
Mark Anderson: Just quickly, this is really for Brandel: in the chat I’ve just put the paper. This is my paper for Rome ’23, because the other guys have already seen it.
Mark Anderson: It was interesting because it came from two places. You know, Dave and I fell into doing it actually having stuff from two different angles. Mine was, you know, why do we keep leaving all these interesting ideas rusting in the long grass? And he was more interested in trying to map out where things were still happening. The seven in it is a deliberate riff on the seven issues for hypertext, which has a resonance in the community; it’s a bit arch, you know. And the point is that the splits are deliberate but not hard-edged; they are there for a purpose. I think the other really interesting thing that comes out from it is the way that narrative, which people probably see as the province of the sort of literary people, is actually probably making the most traction in games. People are thinking about how narrative lives and is created and done in a digital setting. And that’s another really interesting point, because you wouldn’t necessarily think that’s where it lives if you had to guess. Especially in a room of people who, say, don’t do anything to do with games, I bet you they wouldn’t think that’s where it lives.
Brandel Zachernuk: Yeah, that’s fascinating. Have you ever heard of the game Deus Ex? It’s from about 2001 or so.
Mark Anderson: I don’t recall the name, but I just about know of it.
Brandel Zachernuk: It has the most complicated narrative writing structure of any game to date. It would be really interesting to find out what people know about its writing process and their views. We should track some of that stuff down. I’ll make a note of it.
Mark Anderson: You said Deus Ex? I can ask Dave, because he’s sort of joined at the hip to all sorts of people in the gaming community who are academics in that space. One of them will probably know.
Speaker5: Cool, cool.
Brandel Zachernuk: Awesome. Look forward to reading the paper. And yeah, it’s been really, really fun reconnecting and I’m looking forward to being back.
Frode Hegland: It’s been a very good talk. I’m very grateful. Small group, which is great. I’m showing this screen again to remind you that we do actually have a journal and a book.
Frode Hegland: And yeah, it is a huge challenge to make it readable. It’s too damn big. So that’s why, you know, I’ve manually gone through doing all of this stuff. So if you have other thoughts, please do tell. And what I’m doing today, because there are only four of us, is easier with a transcript; there aren’t too many people to faff about with. So I’m going to transcribe this call as a test piece. But if you have further thoughts on the specifics of what our journal actually is, or book or whatever we call it, don’t be shy. We should keep that. We do record everything. And today I felt quite a few specifically useful things were maybe not invented, but at least vocalized, that deserve access.
Mark Anderson: It’s really interesting, too, to hear the discussion spiral out that we’ve in a sense already had, about the problem of transcripts being not all we imagined them to be. The first step is that you can turn it into something digital. Then you realize, okay, well, that doesn’t really tell me so much, because how we speak and how we write are different things.
Frode Hegland: Yeah, absolutely, so that is a big issue in itself. Now it’s two minutes past and I’m very grateful. Glad you came back safe from the land of Japan. We had our Japanese family here at the same time, which was fascinating. First trip: two children, two parents. They were shocked at, say, eating carrots raw; that was one of them. A big piece of salmon on the barbecue was a shock to them; they use tiny pieces of fish. So, you know, it’s difficult to tell what’s going to be new for people. But it was all very lovely.
Speaker5: Okay. Anything in particular in Japan? Anyway, have a great week.
Brandel Zachernuk: The radish thing, so yeah.
Speaker5: Yeah. Okay.
Frode Hegland: Daikon, yes. If you have anyone you think we should invite, please do tell me, or please just invite them. We need to get people in as soon as possible. All right. Bye, everyone.
Speaker5: Yeah. Bye. Bye. Bye.