485: Conversation Is a Kind of Music

Transcript from 485: Conversation Is a Kind of Music with Alan Blackwell, Christopher White, and Elecia White.

EW (00:00:07):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Professor Alan Blackwell. We are going to talk about AI and programming languages, but probably not in the way you think we are going to talk about them.

CW (00:00:22):

Hi, Alan. Welcome.

AB (00:00:25):

Hi! It is nice to be with you.

EW (00:00:27):

Could you tell us about yourself, as if we met at the User Experience Conference that Marian Petre just keynoted?

AB (00:00:39):

Yeah. I have come into this field as an engineer. A lot of the jobs that I have done over my career have involved designing new programming languages. The first one of those was nearly 40 years ago. With the systems that I was deploying, which in my early career were generally industrial automation or laboratory automation systems, I always found it super interesting to talk to the people who were using them.

(00:01:04):

Quite often I thought I could really help them with a job, if rather than just giving them a static user interface, I gave them some kind of simple scripting or configuration or programming language. So long ago, I used to talk to people about, "What is your job?" and, "How can I make it easier for you, by helping you to instruct a computer to do pieces of it for you?" So yeah, that has been a long-term interest.

(00:01:28):

But I am intellectually curious. I did a degree in philosophy and comparative religion. Then I heard about this thing called "artificial intelligence," which is a way of combining an interest in people, an interest in philosophy, and an interest in engineering. That was a little while ago. Some people are surprised that I started my research career in artificial intelligence in 1985, which is before many of today's leading experts were even born.

(00:01:54):

Basically I got into artificial intelligence because of an interest in programming languages. Over the years, I worked for corporate research labs and big companies, actually deploying new AI tools. Often they included novel programming languages. So I went from programming languages into AI, and then out of AI, I became a programming language designer again.

CW (00:02:14):

Okay <laugh>.

EW (00:02:14):

And a professor.

AB (00:02:16):

Yeah, that was a byproduct, to be honest. I really did not think I wanted to do that.

EW (00:02:20):

<laugh>

AB (00:02:21):

My dad, who was a proper engineer, was really disappointed, "I cannot believe you are going back to school again. You are a good engineer." <laugh>

(00:02:30):

But things are pretty sweet here in Cambridge, because the university has got a very liberal attitude to intellectual property. It is super easy to start companies here and do consulting for big companies like Intel or Microsoft or Google or whatever.

(00:02:42):

So after I actually ended up, late in my life, doing a PhD in applied psychology, in order to understand the cognitive ergonomics of programming language design, I thought I would go back to a commercial lab again to design more programming languages. But I discovered that by becoming a professor, I could work for far more companies and design far more languages. So I have really been enjoying doing it for the past 20 years.

EW (00:03:04):

All right. We want to do lightning round, where we ask you short questions, and if we are behaving ourselves, we will not ask for a lot more detail. Weirdly, I ended up with a lot of these for you, so we will try to go kind of fast, but do not feel any pressure.

AB (00:03:18):

Okay.

CW (00:03:18):

Would you rather have dinner with Bootsy Collins, Geddy Lee, or Carol Kaye?

AB (00:03:23):

Ooh, tough. But definitely Bootsy Collins, because I am a bass player.

CW (00:03:26):

<laugh>

EW (00:03:29):

Sometimes we ask when the singularity will occur. But for you, I think, what year do you think we will stop using a floppy diskette to indicate saving things?

CW (00:03:38):

<laugh>

AB (00:03:39):

Well, so many words in our language, we cannot even remember where we got that word from. So I think a floppy disc has already become just a kind of emoji. Like, I would be very hesitant to ever put an aubergine in my text message, because-

EW (00:03:51):

<laugh>

AB (00:03:51):

I sort of know that what it means is not what I think it means. So I think the same with a floppy disc. People do not really care what it looks like. They just need to know that it is a symbol and it means something.

CW (00:04:02):

Would you rather spend an hour with a well-trained parrot or a stochastic parrot?

AB (00:04:06):

I do not like parrots very much.

EW (00:04:08):

<laugh>

AB (00:04:09):

<laugh> Yeah, stochastic parrots are kind of fun. They do not bite you. They do not poo all over the place. So yeah, probably-

CW (00:04:18):

<laugh> Not yet anyway.

AB (00:04:20):

Yeah <laugh>. Heaven forbid. Yeah, no, I will go with the stochastic one.

EW (00:04:25):

Would you rather have dinner with Alan Turing or Claude Shannon?

AB (00:04:29):

Ooh, cool. Wow, I would love that so much.

EW (00:04:35):

Which one?

AB (00:04:36):

Yeah, so I guess the thing is that Alan Turing was a student and professor at Cambridge. I know a lot of the sort of people that he hung out with. I have a friend who played chess with him when they were kids. So although I would love to meet him, I feel like I probably pretty well understand what sort of person he was.

(00:04:55):

Claude Shannon, oh, absolutely incredible. Yeah-

EW (00:04:58):

Yeahh.

AB (00:05:01):

Yep, ten minutes with him would probably already double my knowledge of the real ground truth of information theory.

EW (00:05:09):

I would totally crash that dinner.

CW (00:05:12):

<laugh> If you were teaching an adult to program, would you start with Scratch, Excel or Python?

AB (00:05:16):

Hmm.

CW (00:05:16):

Or something else?

AB (00:05:19):

Yeah. So I have done this. I used to run a class where I taught humanities professors what they could do with programming languages, and I liked Scratch. The reason for that is that it is more interactive. You can use multimedia immediately, so you are not stuck just in text and number world.

(00:05:35):

Excel definitely is good, and I use Excel a lot. I can definitely help people do useful stuff in Excel. But for example, when I designed a programming language for artists, I spoke to a sculptor friend and said, "What would you like to do with this?" And she was like, "I use a computer when I have to do my accounts using Excel, but why would I ever want to use a computer for my actual job? Because that is fun." <laugh>

EW (00:06:02):

Have you ever held a weta?

AB (00:06:06):

Ooh! Worse than that. Yeah, I have had a weta- The horror of a New Zealand child is to have a weta, which is a sort of giant cricket with big poisonous spines on its back legs- I have had a weta inside of my rubber boot when I put my bare foot into it. That is super nasty.

(00:06:25):

But yes, I have also held a weta, because in my sixth form biology class at the end of high school, we did not dissect mice and lizards, but we did dissect wetas.

CW (00:06:36):

I did not know they were poisonous.

EW (00:06:37):

I did not either.

AB (00:06:38):

Actually, I might have been exaggerating a little bit. They give you a nasty scratch and it gets quite inflamed. Maybe it is just a little bit of an allergic reaction, rather than poison. Yeah, we do not have poisonous animals in New Zealand. It is not like those Australian spiders.

EW (00:06:51):

<laugh> Everything in Australia is poisonous.

CW (00:06:52):

I read that their-

EW (00:06:54):

Even the mammals are venomous. <laugh>

AB (00:06:56):

The first time I spoke at an AI conference in Australia, I stayed with friends in Sydney. They picked me up from the airport and they said, "Just one thing, Alan. While you are staying in Sydney, do not put your finger into any holes in the ground." <laugh>

CW (00:07:10):

<laugh>

AB (00:07:10):

"Yeah, I was not anticipating that I would do that. But now that you mention it, I will definitely-"

CW (00:07:13):

"We really mean it, though."

AB (00:07:15):

<laugh> Exactly.

CW (00:07:18):

What is your favorite fictional robot?

AB (00:07:22):

I am sort of interested in the really extreme ones, that transcend the boundaries of what we might think it means to be human. I guess I liked Anne McCaffrey's "The Ship Who Sang." What would it be like to have a spaceship as your body?

EW (00:07:35):

Ooh. Yeah. They had lots of adventures.

CW (00:07:37):

Ann Leckie wrote a book with that premise too, right?

AB (00:07:39):

Yeah, exactly. Yeah. Yeah. Ann Leckie for sure. Her trilogy starts with one of those minds that has been ejected out of a ship and then is unconscious on a desert planet or something. Yeah, that is a great book. I really enjoyed that.

(00:07:58):

It asks super interesting questions. I am doing a lot of research at the moment, just saying, "Now that we have got these things which produce language, but are not really intelligent in the way that we used to think of being human, what does the body really give us?" And that asks us a lot about what part of our intelligence is actually only there because of our body.

EW (00:08:18):

I think there is a lot of fiction that is relevant to that. I think even Bradbury kind of asked the question of, "Why would you make an android? Why would you make a robot look like a human?" There is so little that we can do that-

CW (00:08:31):

That is for us, not for the android.

EW (00:08:33):

A robot- I mean, a robot bartender, I think, was one of the places it came up. Why would you only give it two arms?

AB (00:08:39):

Yep. Yeah, absolutely. And I think this is, of course, one of the things that I riff on a little in the book, is where I claim that AI is a branch of literature, not a branch of science. Because so much of AI is about just imagining what it would be like to be a human, if a human was a kind of thing.

(00:08:55):

That puts us in that tradition of all those stories going back centuries and even millennia to, "What would it be like to have a statue that comes to life, and behaves like a person?" Or, "What would it be like if you made something out of clay, and then you put a scroll inside it and it came alive?" For me- For science fiction- All of those things are science fiction, and what they are about is imagining different ways of being human.

EW (00:09:18):

One of the things that you did not cover in your book, which I do not even think we have said the name of-

CW (00:09:23):

No, we were still in lightning round. So we had a transition out of lightning round.

EW (00:09:26):

Oh, okay. <laugh> We kind of did.

CW (00:09:29):

Yeah. <laugh>

EW (00:09:29):

So your book is- Wow, this is something I really should know. "Moral Codes." Could you talk about your book for, I do not know, 30 seconds to two minutes?

CW (00:09:44):

<laugh>

EW (00:09:44):

Sorry, that question did not come out well, but let us just go with it.

AB (00:09:50):

The title of the book is "Moral Codes: Designing Alternatives to AI." The "Designing Alternatives" part, I think, is what is going to be most welcome to Embedded listeners. Because what the book is all about is saying, "The world needs less AI, and better programming languages." The whole book is really arguing for that.

(00:10:11):

I talk a little about AI, and I talk about what it can do and what it cannot, just addressing this kind of public anxiety. But then I get back to the theme that I am really interested in, which is, "How can you make the world a better place, if you have got better programming languages?"

(00:10:24):

It turns out that that can deliver a lot of the things that AI people think are their biggest problems. So a shorthand way of saying this to people who know a little bit about computers, but also have some training in philosophy, would be to say, "Just imagine that you wanted to tell a computer what to do, in a way that you could be confident that it would do the thing that you have asked it to do. And also that if it behaved differently, you could say, 'Why did you behave the way you did?'"

(00:10:56):

Those are the things that have got technical terminology in the philosophy of AI. The first one they call the "alignment problem." The second one they call the "explainability problem."

(00:11:04):

But then I say to the philosophers, "Just imagine if we did have a way of telling a computer exactly what we wanted to do, and then ask why it did those things. If we designed a special language that would allow us to achieve those things, what kind of language would that be?"

(00:11:21):

And the philosophers go, "Oh wow. Yes, that is really profound. Yeah, definitely. I can see where you are coming from here. That is a really interesting philosophical question." And then I say, "Well, guess what? We have a language like that. It is called 'programming language.'"

EW (00:11:34):

Yeah. <laugh> Sort of.

AB (00:11:34):

We have been designing programming languages for 50, 60, 70 years. And for that long, there have been computer scientists who have been improving them to make sure that the computer does a better job of doing what you wanted it to do. And also so that you do a better job of being able to read it and understand why it does those things.

(00:11:56):

That is the fundamentals of programming language research, but that is a different branch of computer science to AI.

CW (00:12:04):

Just to level set here, how would you define "AI"? Because it has gotten a little muddled, at least in recent decades.

EW (00:12:12):

Something that is coming in ten years. <laugh>

CW (00:12:14):

No, that is nuclear fusion.

AB (00:12:18):

<laugh> Yeah, definitely. AI has been coming in ten years, as long as I have been in the field. This is a book that is written for the general public, because I think it is important that people who are not computer scientists have a better idea of what sort of things you can do with AI, and what sort of things would be better to do with programming languages.

(00:12:36):

But when talking to the general public, of course they do not know the difference between what those two things are. They do not know which kind of algorithms are compilers and which kind of algorithms are large language models. But what they do know is that software is changing the world a lot. So to some extent, everything that is happening today they think of as being AI.

(00:12:59):

That is definitely true of the recently completed European AI Act, because when you look at the way that software systems are described, it is not really about AI at all. It is just about, "Here is what software ought to do."

(00:13:12):

I have even spoken to people who work in Brussels as members of the policy teams that were contributing to drafting that legislation. And I said, "Would it be right to say that you could just replace the word 'AI' with the word 'software' throughout this legislation, and it would really make no practical difference?"

(00:13:27):

And what I was told by the Brussels policy researchers was, "Yeah, absolutely. That is definitely what we are wanting to achieve here. We are wanting to provide a better way of governing software. We just use the word 'AI,' because there is so much hype about that, and that tells people that we are working at the leading edge." And that has been-

EW (00:13:46):

No!

AB (00:13:46):

Yeah.

EW (00:13:46):

<laugh>

CW (00:13:49):

Well, do not worry. They are lawmakers. They do this sort of thing for a living.

AB (00:13:51):

So I can tell you, from my perspective- I said I have been working in AI since 1985. The algorithms that I was using back then, nobody calls those things "AI" today, like A* search or functional programming. Those were the sort of day-to-day tools. Nowadays, those are just useful algorithms, useful languages. We do not call them "AI" anymore.

(00:14:15):

In fact, I think that has been a pattern throughout my career, is that stuff that is AI one year, five years later is just working software. In fact, there used to be an old joke from my master's supervisor who had spent years at the MIT AI lab. He used to say, "If it works, it is not AI."

CW (00:14:36):

<laugh>

EW (00:14:36):

Yes, exactly. Yes.

CW (00:14:38):

Well, when we were coming up, expert systems were the only real AI that was out- "Real AI" that was out there, and they were just glorified databases and lexical parser kind of things. Right.

EW (00:14:49):

Hey! I did my research on that.

CW (00:14:49):

I know you did your research, and it was very cool. And nobody talks about it <laugh> anymore.

AB (00:14:54):

Yeah, I have built some cool expert systems. Absolutely. And of course, in ten years time, we will look back at what ChatGPT does and we will say, "Oh yeah, that is just a-" The phrase "LLM" is going to be right in there. Or, "That is just a transformer. We got better things than transformers." Or we will understand why crossmodal AI seems to be a thing of interest.

(00:15:15):

But a lot of what I am interested in here, is separating the value of the algorithms, which are great. I love a cool new algorithm. And then, what are you going to be able to do with that algorithm? And I would say throughout the long history of software engineering, the really super interesting things you can do with an algorithm, are seldom apparent right at the start.

(00:15:36):

Usually people give demos that are based on things they read in science fiction books, or maybe just what they wrote in their proposal or told their manager. But if it is a good one, over the next five or ten years, you think, "Oh wow, actually there is something super cool you can do with this, even though I did not know that when I first made it."

EW (00:15:51):

You separate the concept of AI into control systems and social imagination.

AB (00:15:59):

Yeah. The reason for this is that there are two causes of confusion, between what I call "the two kinds of AI." Somewhere else I called one kind the "object" kind of AI, and the other the "subject" kind of AI.

EW (00:16:13):

Hm! Okay.

AB (00:16:13):

So they get mixed together for two reasons. One is that they have both made good use of recent advances in machine learning algorithms. Especially deep neural networks or maybe Bayesian machine learning more generally.

(00:16:28):

And the other reason they get confused, is that it is in the interest of some companies to not have you think too hard about the differences between them, because one kind is generally pretty useful. What is now called "reinforcement learning" by trendy young researchers, is- When I did my undergraduate major in it, was called "control theory" or "closed loop control" or previously "cybernetics."

(00:16:53):

But basically you need good learning algorithms if you want to have a system that observes the world somehow, and then changes things. So of course that is the foundation of robotics and all industrial automation, and all kinds of super useful stuff that I have got in my own house. So that is objective, because it is just observing the world, making measurements and doing things. That is good engineering stuff.

(00:17:17):

There is the other kind of AI, which is the science fiction literary stuff of, "What would it be like to be a different kind of human? Can we make a computer that pretends to be a human?" I consider that to be a branch of literature, because it is about re-imagining what we are.

(00:17:35):

So companies that want to say, "Re-imagining what it is to be human is a useful thing to do," quite often do that by blurring the boundaries between stuff that humans do in their own minds and with each other, and stuff that robots do when they are usefully pottering along the roads, not driving over the curbs and things.

(00:18:01):

When it comes to autonomous vehicles, for example, that is a real challenge. Because some of the things we do in cars are human things. Some of the things that cars do are mechanical control systems. So you can do some parts of autonomy very well, and other parts really badly.

(00:18:15):

Personally, I prefer the word "cruise control," because that was an understood boundary of, "There are some decisions I want to make. There are some decisions I want my vehicle to make." As long as we do not confuse the "subject of" and "object of" parts, I am very happy for my cruise control to do automatic stuff that I do not want to be attending to all the time. But there are other things I really do not want my car to do for me.

CW (00:18:37):

That confusion exhibits itself in our industry. We see it quite often.

EW (00:18:41):

Ooh, yeah.

CW (00:18:41):

Where somebody is like, "Well, I have bought this new chip. It has got an NPU on it. I would like to make a model, that controls the temperature of this-" To make up an example, but we have seen similar things. "That controls the temperature of this thermistor." Or something like that. And it is like, "Well, we have a thing called the 'PID loop' that can do that in 15 lines of code. Why are you using a neural network to do that?" It is like, "Oh, because it-"

(00:19:07):

I think certain companies doing autonomous vehicles, which will remain nameless, have been pushing to put the entire self-driving stack into a neural network, and have it manage all the control systems. Which strikes me as completely insane, because you have well-established ways of controlling motors and actuators and sensors. The neural network part should be deeper in the stack, a smaller kernel that is making "fusion decisions" about the world and stuff like that, if at all.

(00:19:38):

So I think the confusion is both- It seems like it is both in the public, but it is also at the engineer level where people are like, "Oh, I will just throw a model at this problem," when there are well established books full of- <laugh>

EW (00:19:54):

Actual solutions, with transparency and explainability?

CW (00:19:57):

Right. Working control systems. Yeah.
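
A minimal sketch of the kind of PID loop Christopher mentions above, in Python for illustration; the gains and the read_temperature/set_heater names are made up for the example, not from the episode:

class PID:
    """Textbook PID controller; roughly the "15 lines of code" alternative to a neural network."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                   # accumulate error over time
        derivative = (error - self.prev_error) / self.dt   # rate of change of the error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical usage: hold a temperature at 70 degrees, sampling every 100 ms.
# pid = PID(kp=2.0, ki=0.1, kd=0.5, dt=0.1)
# set_heater(pid.update(70.0, read_temperature()))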

AB (00:20:00):

Yeah. Part of this is about us engineers. I will hold up my hand and say, "I am guilty of this too." We loved reading science fiction when we were kids. The three of us, we were all just geeking out about science fiction books before. Sure, engineers love reading science fiction, partly because it describes worlds where engineers get to be the boss and rule the world.

EW (00:20:21):

<laugh>

AB (00:20:22):

But quite often you are working on a relatively mundane piece of engineering, and you see something that looks like a thing that you read in a science fiction book. It is like, "Oh, this would be so cool to do this. I am actually having fun in my job!"

(00:20:37):

I think that is one of the challenges that we need to face, is the temptation to use inappropriate solutions when something simpler would work better. I think right back in my early career, something that the old gray-haired engineers would always say was, "There is a simple way of solving this problem. Why do you not do that?"

CW (00:20:55):

Because it is not any fun! <laugh>

EW (00:20:56):

I do not know. I feel like some of these pushed upon neural network AI solutions to simple problems, it is just marketing.

CW (00:21:06):

Oh, there is a lot of that happening, because if people have chips to do it, and they want to sell them- Engineers are receptive to marketing.

AB (00:21:15):

There is a super pernicious kind of marketing that is going on right now, and that is the use of the term "artificial general intelligence." I really come down hard on this in the book, because the word "general" here is meant to imply, "Oh, no limitations at all. This can do anything." "AGI" is basically a synonym for "magic."

EW (00:21:33):

<laugh>

AB (00:21:33):

So I claimed in the book that whenever somebody says to you, "AGI," what they are actually saying is "magic." And when they say, "This problem pretty soon will be solvable, because we will have AGI," what you should hear is, "This problem pretty soon will be solvable, because we will have magic."

(00:21:49):

I thought that nobody would ever be dumb enough to just outright say, "By the way, I have invented magic."

CW (00:21:56):

<laugh>

AB (00:21:56):

Then there was the interview between British Prime Minister Rishi Sunak and Elon Musk last year, when Elon literally said, "The thing about these AI technologies is that they are actually magic, so they can do anything." <laugh> I thought, "Oh my God!" <laugh> I had a little doubt about Elon's engineering credentials, but we get skeptical when people are making direct appeals to magic.

(00:22:23):

I try to explain in the book for a general audience, that there are some basic problems of reasoning in the arguments for AGI. Saying, "I have invented something that is beyond definition, and has got full generality such that it is not constrained by what I told it to do" - I try to argue that this is directly equivalent, in terms of argumentation structure, to saying, "Now that we have got magic, we do not need to be engineers anymore."

(00:22:49):

So I think a little bit of that maybe is underpinning the foolish students. I have seen this myself, as you say, "They need a PID controller. They implement it with a deep neural network." Then they show you that it is only slightly less stable than <laugh> the five cent chip they could have bought.

CW (00:23:07):

And it only requires 20 watts. <laugh>

AB (00:23:09):

Exactly. Yeah. And a GPU. <laugh>

EW (00:23:14):

One of the things that you definitely take a big bat to, like a piñata, is LLMs, and whether or not they represent a form of AGI. You used the word "pastiche," and I really liked that as an explanation. Could you talk a little bit about that?

AB (00:23:36):

Sure, sure. "Pastiche" is a great word, which I learned more about writing the book, because I talked to an Italian colleague who was able to explain to me where the word comes from. It is a cooking term from Italian. The Italian word is "pasticcio." "Pasticcio" means a meal where you just mix up your favorite ingredients in a way that does not really have a grand scheme. It is just you put a bunch of stuff in there that you really like.

(00:24:04):

I have tried this out a number of times on Italian people now, and I say, "What is the example of- The first thing that comes to mind when I say 'pasticcio'?" They say, "Oh yeah, like a lasagna." Lasagna is a pasticcio. It is not this grand cuisine thing. It is just a whole bunch of stuff that is nice. It has got pasta, it has got meat, it has got cheese. It is like, "Yeah, why would I not like all these things?"

(00:24:23):

So stirring stuff together just because the ingredients are things that you like, but with no overall grand scheme, is a term that has been used in art history for a very long time. Back way before we had any kind of printing technologies or ways of reproducing art. People in places like Pompeii, if they are decorating the walls of the house, they did not have wallpaper, but they liked to have nice pictures.

(00:24:47):

So they would get some guy to come around and just put a bunch of pictures on the walls of their house of people feasting, or if it is a brothel, people having sex or whatever. The guy that does this kind of work, he is just like your house painter, this is not a great artist who is going to be celebrated in history. He probably has been down to see the artists that the Pope pays who are the really good ones, and he sort of has an idea of all those things. Then he just puts some version of them on your walls.

(00:25:10):

So that is where the term "pastiche" comes from, in art history. Is just sort of a jobbing piece of work, where you mix up nice ingredients that you have seen somewhere else, but you do not really claim to make it original.

(00:25:22):

If you do a degree in art college or in music college or something, this is definitely what you are warned against. So nowadays, when you can just have a direct reproduction of the Mona Lisa if you want, you do not need someone to come and paint on your wall some poor imitation of the Mona Lisa. Nowadays we want artists, every artist, to do something that is original. So "pastiche" is a term that is used by art tutors to tell their students, "Do not do that. Do not just imitate other people's stuff. You should have an original idea."

(00:25:51):

The definition, if you look back at the textbooks on art theory, that are trying to define exactly what it is that you do not want artists to do, there is this nice definition actually from a couple of centuries ago, saying, "The thing about a pastiche is that it is not a copy, but it is not original either."

(00:26:11):

So I think that really gets to the nub of what it is that we find a little unsatisfying about LLMs, is that, yeah, it is not a copy because it is not exactly like any other piece of text. But every time you read it, you think, "Ooagh. This is really not very impressive. Everything here is just stuff I could have found on the internet." I think that makes clear what the limitations are.

(00:26:38):

In the book, I talk a little bit about the fact that this is one of the problems of morality that we are dealing with here. Because a lot of these engineering decisions, it is not that nobody's life is affected by this, people's lives are affected. So of course, one of the things that we are really concerned about with LLMs, is that there are people out there who are doing actual original artwork, but they are not being paid for it anymore, because their original work got stirred into ChatGPT or something.

(00:27:05):

If you ask ChatGPT, "Please, can you write a novel kind of like this?" Then as long as you do not use the novelist's name, which triggers the guardrails and says, "Ooh, no, no, no, no, I am not going to infringe anyone's copyright." But as long as you just say, "This is the kind of novel that I would like," without saying who it is that you are ripping off, then it will very happily give you a whole bunch of stuff, just like that person's work.

(00:27:28):

That is why the Screenwriters' Guild and so on are really upset. Maybe studios will just plagiarize their stuff at one remove, by asking for things and not paying for them. There is a whole economics there of where the training data is coming from, and why the architecture has been designed in a way that includes no traceability at all.

(00:27:46):

I even argue, "Maybe it has no traceability, because that was just so convenient for the companies that invest in this. Even if there had been a technical solution of how you could make traceability from the training data to the output, even if that solution was available, it is not in a company's interest to pay for it or even to ask the question."

(00:28:07):

I would not be at all surprised to learn that some researcher inside of Google or inside of OpenAI did discover a way of pretty effective traceability to original source documents and was told, "We never want you to ever mention that again, <laugh> or do any work on it." Because why would they want that?

CW (00:28:28):

That is called evidence.

AB (00:28:30):

<laugh> Yeah, neural networks are like a plagiarism machine. They are custom designed to rip off intellectual property, without traceability of the original sources.

EW (00:28:40):

And the guardrails are a lot like Star Wars guardrails.

CW (00:28:43):

<laugh> Yeah.

EW (00:28:44):

They do not really exist.

CW (00:28:47):

One of the things that bothers me- You touched on it with the larger issue of taking original work which has been trained on and then reproducing it. Because the flip side is a lot of working musicians and artists are not producing the Mona Lisa and getting $10 million or whatever. They are making ad jingles for $50 or $200 a couple times a week, or making magazine covers or small pieces of artwork that they are doing on commission.

(00:29:14):

That is the sort of thing that LLMs are getting kind of okay at, to make just kind of a junky image or a not very great song that is appropriate for an ad. The vast majority of working musicians and artists are working class people, doing small work that is being replaced, or the intent is from some companies to replace that kind of work.

(00:29:38):

I think that is going to be a major disadvantage for society, because those artists are paying the bills with scutwork, so that they can work on actual passion pieces and things like that, which probably will not be funded.

AB (00:29:55):

Totally true. In the book, I really make a point that so much of what we call "AI" is in fact not a technology, it is the set of business models. It is a consequence of certain economic policy decisions.

(00:30:08):

One of the sources that I draw on there, is a great book by Rebecca Giblin and Cory Doctorow, "Chokepoint Capitalism." Where they talk about the way that the digital realm is configuring so many different forms of media, so that creatives can only sell their products to one company. Whether it is Spotify, or whether it is Ticketmaster, or whether it is Audible, or whether it is YouTube.

(00:30:31):

Once you get to that situation, that company can just drive the price down in what they pay. They can drive it down to practically zero, because creatives are going to create. Even if you do not pay them, they will create.

(00:30:46):

And once you have got a monopsony- Which is the opposite of a monopoly, where you do not have one seller but one buyer, and it drives prices down instead of driving prices up. That has the effect that there is serious danger that all creative professions will have to turn into hobbies, unless you are Taylor Swift.

(00:31:05):

That everybody else in the world will maybe do the work for the love of it. Or even worse than that, maybe they will have to pay to publish their stuff. Your royalties will be a thing of the past.

(00:31:17):

I stop for my possible academic readers and say, "You might think this sounds really crazy. But wait, this is actually exactly how professors have to work already. Like, for most prestigious scientific journals, you literally have to pay to put your work into those journals. They do not pay you."

EW (00:31:32):

I liked the section where you talked about the effect of photography on painting. Of course it puts portrait painters out of work, because anyone could get the figurative images. And then it went from you had to have specialized equipment, to us all having cameras in our pockets.

CW (00:31:49):

Some of them using AI. Sorry. <laugh>

EW (00:31:52):

That aside. But this reminds me, like Socrates did not think writing was a good idea, because it would externalize thoughts when you should have them internal.

CW (00:32:04):

The jury is out on that one.

EW (00:32:05):

<laugh>

AB (00:32:07):

Yeah, maybe. When I am being provocative, I tell people, "Maybe with LLMs, we have finally proven that Socrates was right after all."

EW (00:32:14):

<laugh>

AB (00:32:15):

Because we look at all these books and we think, "Wow, this is actually a lot of crapola." <laugh> It is talking to real people that is the interesting part.

(00:32:24):

No, I think you are absolutely right, that being modern humans for absolute centuries has been about reinterpreting ourselves in the light of new technical changes. Whether it is the printing press, or whether it is desktop publishing, which put a whole lot of typographers and graphic designers and illustrators out of work.

(00:32:45):

Because a lot of everyday scutwork, as you said before, I could just do that myself. I did not need to hire a typesetter to publish my church newsletter, because I could just do that on my desk. So yeah, throughout my lifetime, there have always been whole professions where the people who were doing that job, it becomes mechanized, and within a couple of years they say, "Oh, what was my real job, and what was I really wanting to do?"

(00:33:13):

I used to worry about this a lot in my early career as an industrial automation engineer. I would be going and putting automatic devices into a factory, and I would be chatting to the guys who worked in the factory, so that I could learn more about how to make the system more usable and more efficient.

(00:33:28):

I would get worried and I would say, "But I am sort of worried. Are you going to lose your job, because I have put this machine here?" The response that I got more often than anything was, "This is a terrible job. Nobody should have to do this. So I will be super pleased. I think actually my employer respects me, so I think I will probably get a job somewhere else in the factory. But even if I do not, to be honest, I would rather be doing something else."

(00:33:55):

I think we have seen that happening over hundreds of years: if people are exploited by their employers and they are driven into poverty, it is not the machine that is doing that. It is the decisions the employer is making about the economics of the company, or maybe it is decisions their government is making.

(00:34:09):

So I think adjusting to automation is super interesting, and it is something that engineers can be really engaged with. The downsides of that- We need to be clear that that downside is not a direct technical effect of a decision that engineers are making. That is an effect of the way that a company made use of that technology.

EW (00:34:30):

Totally changing subjects.

AB (00:34:32):

Yeah.

EW (00:34:33):

What about LLMs in coding?

AB (00:34:35):

Yeah, exactly.

CW (00:34:35):

<laugh>

EW (00:34:38):

<laugh> Not changing subjects.

CW (00:34:38):

No, I was going to go on another rant about art. So this is good.

AB (00:34:41):

We certainly need to get back to the fact that there are many, many people writing books about AI at the moment. But not so many who are saying that the alternative to AI is better programming languages. So let us make the transition to that.

(00:34:52):

This is a book for the general public. Some of what I do is just to give them a little bit better understanding of what programmers really do from day to day. Because I think that is helpful for everybody to understand more about where software comes from, and why it is difficult to make software.

EW (00:35:09):

And where we do not spend all of our time actually writing code. As much as that is what we talk about and what it looks like, that is actually usually a much smaller part of our job than people expect.

AB (00:35:20):

Indeed. Yeah. LLMs are great assistants for code writing. I explain that it is not so much different to predictive text. And in fact, the programming tools I use have had great options for autocomplete and refactoring. Sometimes the people selling them call that "AI".

(00:35:45):

Programmers are very good at making their own job more efficient. So we always take whatever the latest advances are, and use them to make better IDEs and programming editors and so on. So that is nothing new.

(00:35:54):

And of course, transformer based large language models, they definitely help with allowing you to quickly generate syntax from pseudocode and all that kind of stuff. So that is great.

(00:36:06):

What I take care about in the book though, is to say that this is not the same thing as the singularity coming into being because it has programmed itself to become kind of intelligent beyond what we can imagine. Because that is an argument for magic, really.

(00:36:25):

Yes, lots of programmers every day use Copilot and other LLM based tools, but definitely it is not writing software by itself. I try to strike a balance of acknowledging that these tools are super useful, but also that because they are not magic, they are not going to do some of the things that are claimed.

(00:36:45):

There are a couple of interesting boundary conditions though. One of those is really trivial things, "Make me a super simple computer game," or, "Put some JavaScript in my webpage to make an animated bouncing ball following the mouse." It seems that you can do jobs like that pretty well with ChatGPT. Because they are so small and self-contained, and you do not need to know much about the context, to make it work the way you have said.

(00:37:14):

Practically speaking, of course, we know that ChatGPT- Its training data includes the whole of GitHub. So you can be pretty confident that someone, somewhere on GitHub, has made the program you want. You have just got this plagiarism avoidance thing here, that it has remixed their original code just enough that you can pretend that you generated that from scratch.

(00:37:36):

And if you are lucky, it is not using a patented algorithm. Although I can tell you that the companies that sell these tools are pretty nervous that it might be. <laugh>

(00:37:44):

So that is one kind of- That is an edge case where you can produce code straight out of the box. It is a little bit more effective than just an even better autocompleter, an even better refactoring tool.

(00:37:58):

The other thing that they can do super well, is producing plagiarized answers to student programming assignments.

EW (00:38:07):

<laugh>

AB (00:38:11):

Student programming assignments are the most context free things you can imagine, because the poor professor grading them does not want to have to stop and think. A student programming assignment has got to be some really dull out of the box algorithm, because otherwise it is going to be too hard to mark.

(00:38:25):

And of course, students are just relentless. They are always uploading <laugh> their class exercises to GitHub and Stack Overflow and stuff. It was already the case that any student who was determined not to learn how to code, could find the answers online.

(00:38:42):

And guess what? ChatGPT can do it just as well, because it has been trained with the same stuff. For me, that does not tell us anything about whether AI will be able to program itself in the future. But it tells us quite a lot about the nature of learning programming.

CW (00:38:58):

The number of times I have used ChatGPT to try to write a script, "Write me a Python script to do something incredibly boring, that I cannot be bothered to do," I have had to spend a lot of time fixing it, or correcting it, or thinking-

EW (00:39:15):

Figuring out what it is trying to do.

CW (00:39:16):

Figuring out what it thinks it is doing.

AB (00:39:16):

It is not what I told it to do.

CW (00:39:18):

It requires a level of skill that is pretty high, to interpret what it is saying and correct it.

EW (00:39:25):

And writing it would just be easier.

CW (00:39:26):

Well, sometimes that is true. Definitely. So I think the message that, "Oh, you can just use ChatGPT to program," I see that a lot. People are like, "Ah, just do this." It still requires a high level of skill to take that and turn it into something that is actually correct. And you can miss things too, even if you have a high level of skill. Bugs are hard to find sometimes, especially if you have not written the code.

AB (00:39:53):

Undoubtedly. LLMs are pretty useful everyday programming tools. The statistics suggest that a lot of working programmers are using them all the time.

EW (00:40:02):

Hm!

AB (00:40:02):

I think the Copilot integration in Visual Studio is pretty good. I have got friends who do empirical studies of professional programmers, just finding out that they are generally useful in lots of ways. I have not cut a lot of code recently, but next time that I do a big project, I certainly expect that I will get another productivity gain.

(00:40:23):

But I think there is one serious danger here for people that are not professors building prototypes like me, but people who are actually producing safety critical code or stuff that you really want to rely on. The way that I explain this is to say, "Everybody knows what it is like to have a new programmer join your team, who writes just really lame code. It does not compile. It is formatted really badly. The identifier names are stupid."

CW (00:40:48):

Me in 1995!

AB (00:40:49):

Exactly, exactly. We are used to having people like this on the team. You sit down with them, you do code reviews, you beat it out of them, and after ten years, they are sort of a moderately competent programmer.

CW (00:40:57):

<laugh>

AB (00:41:00):

Sometimes they are super smart. They just make a lot of dumb mistakes, and that is okay too, because you can fix the dumb mistakes. So basically it is all right if it looks bad, but it is really quite good. It is kind of all right if it looks bad and it is bad, because you can see it needs fixing.

(00:41:13):

The worst though is you get programmers sometimes that just- They produce code that looks quite plausible. It is beautifully formatted and stuff, but it has got some really terrible underlying logical flaw. Worst of all, if it is in some edge case that was not in the specification, and the person never stopped to think about it properly.

(00:41:30):

Well, that is exactly how LLMs code. It is the worst possible programmer in your team. Because you do not want code that looks completely plausible, but actually <laugh> has a subtle bug that you had never thought about. So, yeah. In proper large scale software engineering, this is not the kind of coder you want on your team.
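
A contrived illustration of the failure mode Alan describes: code that reads cleanly and is nicely formatted, but quietly mishandles an edge case nobody specified. The example is invented, not from the episode:

def split_into_frames(data: bytes, frame_size: int) -> list[bytes]:
    """Split a byte buffer into fixed-size frames for transmission."""
    frames = []
    for i in range(len(data) // frame_size):
        frames.append(data[i * frame_size:(i + 1) * frame_size])
    return frames

# Looks plausible and passes a quick test with a buffer that divides evenly,
# but any trailing partial frame is silently dropped: an edge case that was
# never in the "specification" and is easy to miss in review.
# split_into_frames(b"ABCDEFGH", 3) returns [b"ABC", b"DEF"] and loses b"GH".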

EW (00:41:49):

We did an informal poll of our Slack group on who uses GitHub Copilot. I was surprised at how few people loved it. A lot of people had tried it, and some people used it intermittently, or used it until it became irritating and then stopped using it.

CW (00:42:09):

There were several people who used it regularly, but they knew its limitations too.

AB (00:42:14):

From my knowledge of the literature and empirical studies of software engineers, your informal poll definitely aligns with what researchers are finding as well. That certainly is the case.

EW (00:42:25):

Well, we are pretty niche. I mean, embedded is not standard.

CW (00:42:29):

It is probably not well-trained into ChatGPT right now. <laugh>

EW (00:42:32):

<laugh> It is not as well-trained into ChatGPT.

CW (00:42:33):

"Give me an STM32 whatever HAL." Yeah. I mean it is out there. There is stuff in there, but it is worse.

AB (00:42:40):

Yeah, interesting. A lot of my research colleagues in the compiler groups and so on- There are really interesting intermediate languages. The last assembler that I did a lot of work in was 68000, <laugh> so we are talking more than 30 years ago. Yeah, I think people are certainly making virtual machine languages and intermediate languages that are designed as LLM targets. So I think we will get more interesting stuff.

CW (00:43:12):

What do you mean by that?

AB (00:43:14):

Oh, so what I mean is, at the moment, as you say, the kind of languages that embedded software engineers work with, there are not a lot of examples of those in the training data for your mainstream commercial large language models.

(00:43:28):

But if you designed a very low-level language, so like maybe LLVM or something like this, and constructed its syntax in a way that you know is going to get efficient weight distributions in a typical transformer architecture- At the moment, I do not think there has been a lot of customization of-

CW (00:43:58):

Interesting.

AB (00:43:58):

Yeah, I think they have just been relying on the fact that it looks kind of like text. But yeah, I am sure they have done custom tokenizations-

EW (00:44:07):

Hm!

AB (00:44:08):

But I do not think that you see PL semantics. There used to be guys, actually- I did know people who worked on the use of attention architectures for programming language synthesis, but that was in the days before BERT and the huge swing of enthusiasm towards very, very large natural language training.

CW (00:44:33):

So you are saying that we could design programming languages that LLMs would consume better?

AB (00:44:40):

Totally.

CW (00:44:40):

And therefore- Oh my God. Okay.

AB (00:44:43):

Yeah, I think so. I have got colleagues here in the computer science department who are the real experts, but if I was chatting to them and just brainstorming what we might do- I guess to get a big enough training set, LLVM we have got good compilers to and from, also definitely to C, and to various assemblers, and so on.

(00:45:09):

So that means we could actually synthesize quite a big training set, where we have got crossmodal data from specifications and comments. Yeah, so that would probably be- I imagine people are attempting this. It seems like that would be a good approach.

EW (00:45:22):

When I say ours is niche, it is not necessarily about it being C. It is more about trying to deal with one of a thousand different microprocessors, interfacing to one of a hundred thousand different peripherals. And doing it with or without DMA, at this certain speed, and blah, blah, blah.

CW (00:45:41):

<laugh>

EW (00:45:41):

I mean, that is what makes the job challenging and interesting, and hard and sometimes impossible.

CW (00:45:49):

And slow to develop. <laugh>

EW (00:45:51):

Yeah.

AB (00:45:52):

So that used to be my job-

EW (00:45:53):

<laugh>

AB (00:45:56):

Assembly programming and dealing with all those nasty little issues. Not only did I work at that level, but I used to design the CPU boards that ran the code that I was installing in factories. Yeah, right down to the hardware is a thing I used to do. But to be honest, my current job is a lot easier.

EW (00:46:16):

<laugh>

CW (00:46:16):

<laugh>

AB (00:46:16):

So all respect to your listeners on Embedded, because I totally understand that that is real work.

(00:46:23):

One thing that I do like to do when I am talking to proper programming language designers and compiler builders and so on, is to have them think a little bit more about whether you can give some power of programming to people who are not actually engineers, and who did not take computer science degrees or engineering classes.

EW (00:46:41):

End user programming?

AB (00:46:42):

Exactly. This is the field of end user programming. So it is giving people the ability to just write enough code to be able to automate the drudge work from their own lives. The absolute classic example of this is the spreadsheet, which is a programming language.

(00:46:58):

It is a very specialized domain specific programming language, and the source code is kind of weird because it is all hidden inside and you can only see it one line at a time. But despite all of that, it is super useful for people that would otherwise have spent a lot of time sitting down with a calculator, typing in numbers.

(00:47:14):

So giving that kind of spreadsheet power, but for people who are not accountants. We are getting a big variety of things that are probably Turing complete in some kind of way, and can definitely automate stuff that would otherwise be real drudgery. A lot of these are the things that are called low-code/no-code languages. Whether it is wiring together data flow components, or specifying simple transformation rules.
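
A toy sketch of the point that a spreadsheet is a programming language: each cell is a small formula over other cells, and the whole sheet is a dataflow program that is re-evaluated on demand. The cell names and formulas here are invented for the example:

# Each "cell" holds a formula (a function of the sheet), just as a spreadsheet
# cell holds something like "=A1*A2" rather than a stored value.
cells = {
    "A1": lambda sheet: 120.0,                      # unit price
    "A2": lambda sheet: 3,                          # quantity
    "B1": lambda sheet: sheet["A1"] * sheet["A2"],  # like "=A1*A2"
    "B2": lambda sheet: sheet["B1"] * 1.2,          # like "=B1*1.2", add 20% tax
}

class Sheet:
    def __getitem__(self, name):
        return cells[name](self)   # naive recalculation: dependencies are evaluated on every read

print(Sheet()["B2"])               # 432.0, recomputed from the formulas above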

(00:47:43):

You can give people a lot of computational power, without necessarily telling them that they are programming. And that is why the title of the book is "Moral Codes." So "Codes" is a reference not just to legal codes and things, but also to the power of giving people the ability to program. So codes can look like anything. They can even look like graphical user interfaces.

(00:48:10):

I realized about halfway through writing the book, that the title "Moral Codes" also made a nice little acronym. So the acronym or backronym is "More Open Representations Accessible to Learning with Control Over Digital Expression." And that is what we would all want our programming languages to be, and also what we would like our UIs to be.

(00:48:32):

We would like them to be representations that show us what the system is doing. And we would like that to be open. And we would like it to be learnable. And quite often we would like it to be- For many times we want to be creative, or we want to be exploratory, we want to express ourselves.

(00:48:46):

So quite a lot of the book actually talks about just experiences of being an engineer, that are sort of things that are fundamental to being human. Expressing yourself, and having control over what goes on around you. Not having to spend too much of your life in drudgery and repetitive stuff, that could easily have been automated.

EW (00:49:03):

I liked that section of the book. It reminded me a lot of expert user interfaces, versus novice user interfaces. Expert user interfaces, you think about Photoshop with all of the buttons and-

CW (00:49:16):

Toolbars.

EW (00:49:17):

Windows and toolbars and-

CW (00:49:19):

Shortcuts.

EW (00:49:19):

You can do everything you want, and you can do half of it from the key commands, if you are willing to learn those. But as a novice, walking up to Photoshop just makes me want to turn around and walk away. <laugh>

AB (00:49:34):

Yeah, it is easy to confuse the needs of what kind of usability you need for someone who is an expert that might be using the system every day, and what kind of usability you need for a person who is only going to interact with this thing once a year.

(00:49:47):

Like doing my tax return <laugh>. Maybe when I was younger, I could remember things that long. But I have to say, nowadays I come up to do my tax return, and every year it is like, "Oh, this is the first time I have ever seen this."

(00:50:00):

So I really want that to have the most basic tutorials and things. I only do it once a year, so I do not mind, as long as things are clearly explained. I do not mind if the buttons are very big with pictures. Because I do not want this to be any more painful than it has to be.

(00:50:14):

But my accountant- Well actually, I do not make enough money to have an accountant. But if I had an accountant, they would be really pissed off if they had to use accounting software that has got great big buttons, and explanations of everything to do. Because they use this thing every day, so they want to have all those power facilities right at their fingertips.

(00:50:35):

So yeah, there is a difference between a system like Photoshop, and a basic drawing application. But I think we see gradual evolution in many parts of computer user interfaces, where things that at one time were considered to be accessible only to programmers, turned out to be kind of useful for other people.

(00:50:52):

So the very first Paint programs did not have any notion of putting layers of an image over other layers. So if you put another paint stroke, it would destroy all the stuff that you had already done.

(00:51:04):

Whereas a good user of Photoshop knows you get your base layer, and maybe you put your photograph there, and then you put your masks over the top of that, and then if you have got some text, you put that in another floating layer. That makes it easier to rearrange things. It makes it easier to visualize what you have done. You can show and hide them.

(00:51:21):

You get all sorts of sophisticated capabilities from this pretty simple abstraction, like, "There is just one thing I would add here: the idea that there are layers to your image." That is an example of the kind of advance that I look for in end user programming systems: some simple abstraction that has turned out to be super useful, and even essential to professionals, but is not so hard to learn.
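
A toy sketch of the layers abstraction described above: the image is a stack of independent layers composited in order, so any one can be hidden, edited, or reordered without destroying the rest. Pixels are reduced to single grayscale values to keep the example short:

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    pixel: float          # stand-in for a full bitmap
    opacity: float = 1.0
    visible: bool = True

def composite(layers):
    """Blend visible layers bottom-to-top with simple alpha compositing."""
    result = 0.0
    for layer in layers:
        if layer.visible:
            result = result * (1.0 - layer.opacity) + layer.pixel * layer.opacity
    return result

stack = [
    Layer("photo", 0.6),
    Layer("mask", 0.0, opacity=0.3),
    Layer("text", 1.0, opacity=0.8),
]
stack[1].visible = False       # hide the mask without touching the other layers
print(composite(stack))        # 0.92: the photo blended with the text layer only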

(00:51:47):

And if you just figured out a way to put it into the UI in a way that is intuitive and understandable, so the first time you use it, you can see how it works. You do not need to go on a training course. You say, "Oh, now that I see what I can do with this, okay, I can achieve a bunch more stuff." So I think a lot of media authoring tools do have those sort of abstractions built into them.

(00:52:08):

To some extent, a lot of the stuff that you do in graphical user interfaces, there is always more potential to think a little harder about what those mean. In terms of the semantics of the diagram. What the syntax is. Just thinking back to the fact, "Oh, yes, that is a programming language."

(00:52:23):

But what we really want to avoid, is taking that away from us. Because quite often the companies we buy software from, are not very interested in us controlling the experience better. What they would really like is that they have more opportunity to surveil us, and sell our data to their customers.

(00:52:43):

My friend Geoff Cox said, "Basically, you have got a choice. Program, or be programmed."

CW (00:52:54):

<laugh>

AB (00:52:54):

That is really where we are here.

CW (00:52:56):

That is right up there with, "If you are not paying, you are the product."

AB (00:52:59):

Absolutely. Program, or be programmed. Asking whether our user interfaces are things that show us things about the system state, and allow us to modify it with direct instructions. Or whether the system state is hidden, and we are sort of allowed to provide training data, but where it is very difficult to give an unambiguous instruction when you definitely want the system to behave differently.

(00:53:21):

In my classes where I teach user interface design, I ask my students to think about, "Just how good is a smart speaker, like Amazon's Alexa or something? Would you like to do your programming in future, by just talking to Alexa?" They say, "Oh, that would be brilliant!"

EW (00:53:38):

No!

AB (00:53:40):

Well, exactly. You talk them through it like, "So this would mean that the source code is invisible. And you have one shot. You just have to speak it right the first time, and you cannot go back and edit it. How good a programming language is that?" It is like, "Ohh yeah."

(00:53:54):

There are many, many people who suggest that the graphical user interface will go away. That we will not have any representations. We will not have any diagrams. We will not have any words. We will just speak to the artificial general intelligence.

(00:54:08):

There is even a phrase for this. There is a research community that calls this "natural user interfaces." Supposedly this is going to be the successor to the graphical user interface. The natural interface where you just speak, and you never have to look at anything.

(00:54:21):

Well, that is classic program or be programmed. That would be heaven for the surveillance capitalism companies, because they can sell you whatever they want, and you have got no remaining control at all.

CW (00:54:35):

Meanwhile, we have got typewriters attached to every computer still. That <laugh> does not seem to be in any danger of changing. Nobody has beaten that, in terms of input.

AB (00:54:47):

I had the privilege to work with David MacKay, who was one of the geniuses of the 21st century, really. Among his many incredible achievements was creating the first machine learning driven predictive text interface, a thing called "Dasher," which is still available online in some places.

(00:55:03):

He proposed that as an alternative to the keyboard. It was super impressive, and it could predict text several words ahead, at a time when we still had the T9 system for spelling out your words on your feature phone handset.

(00:55:19):

I worked on Dasher with him about 25 years ago. It was pretty clear at that point that language models would be able to produce pretty long streams of text, that looked just like the kind of text that I would have wanted to write anyway.

(00:55:34):

It turned out though that although the Dasher interface was very information efficient, it was like playing a fast video game. Because you could type text very, very fast, as long as you watched what was coming out of it, and controlled the mouse to steer towards the sentences you were wanting to write.

(00:55:53):

As it turned out, it was far more effective to integrate that with the keyboard. And actually a researcher who worked with David and then with me, and is now a professor in Cambridge, wrote his PhD dissertation inventing the thing that we now know of as the "swipe keyboard." What he did was to integrate Bayesian language models with something that looked like a keyboard. Which meant that if you did not want to go too fast, instead of drawing those fancy shapes, which is pretty fast if you do it well, you could just go back to pressing the keys one at a time.

(00:56:24):

We have trained ourselves to be keyboard users. Not just QWERTY keyboards. But even musicians get to use the piano keyboard with all the black and white keys, which is pretty fast. Once you know how to play it, you can get those chords out very quickly. But it is also very constraining, because if you wanted to play a note between the keys, "Oh, sorry. You cannot do that on a piano." So we have got some trade offs there.

(00:56:50):

The keyboard is not fully optimal, but a lot of it has been optimized into muscle memory, so that we may not see it disappearing very soon.

CW (00:57:01):

<laugh>

AB (00:57:01):

Certainly it is pretty annoying to interact with people who can only type as fast as they speak.

CW (00:57:11):

Right.

EW (00:57:11):

<laugh>

AB (00:57:11):

Most practiced keyboard users can type interesting stuff quite a bit faster than they can speak. <laugh>

EW (00:57:18):

I can type faster than I can think sometimes, just check my emails.

CW (00:57:22):

Which is why waiting before sending is very important.

EW (00:57:28):

<laugh>

CW (00:57:28):

<laugh>

EW (00:57:28):

You mentioned predictive text and its helpfulness to you, even writing in your own style. How much AI, whatever that is, did you use in writing the book? Did you experiment with any of that?

AB (00:57:46):

Yeah, I really play some games with my readers. There are pieces where I ask them to guess how I wrote a particular sentence. I actually wrote the whole manuscript of the book before the launch of ChatGPT. So I am a bit relieved that I did quite a good job of anticipating what was going to happen next.

EW (00:58:03):

Yeah.

AB (00:58:05):

ChatGPT was released in the gap between the original delivery of the manuscript and delivering the final revisions the following summer. I had to go back to the book and say, "Ooh, how much of this do I need to change, now that everybody in the world knows what this stuff is?" Because previously I had had to put a lot of effort into explaining what a large language model is, and why it was going to be interesting in the future.

(00:58:22):

So ChatGPT was not there when I wrote the bulk of it, but I reported some experiments that I did with GPT-2 and other earlier models.

(00:58:37):

Something that I reflect on a bit is the role of craft and embodied practice, which I guess I have alluded to a little bit when we were discussing keyboards just now. People who do a lot of coding, that stuff comes through your fingers and you do not necessarily- Just like you said Elecia, you do not necessarily think about- It comes out before you have even thought about it.

(00:58:57):

Definitely this happens when I play my musical instrument. I have been playing in orchestras for 40 years, and I definitely cannot describe to you the notes that I am playing. They go off the page, into my eyes, and into my fingers. The brain gets bypassed. <laugh>

(00:59:16):

What was interesting to me as I was writing the book is those craft elements. I was definitely reflecting on the tools that I was using to write the book, and how they related to what I was saying.

(00:59:25):

So I used a couple of predictive text technologies routinely. One is that I use a MacBook Pro with a little bit of smarts: the display bar above the keyboard that, in a lot of applications on the Macintosh, will come up with a choice of words. Occasionally I find it faster to grab that from the top of the keyboard, rather than keep typing.

(00:59:51):

What I did far more of was write quite a lot of the final draft of the book in Google Docs, which would suggest continuations, though not complete whole sentences. And then I had to say to myself, "Hmm. Well, I could complete the sentence that way. Is that what I want to do?"

(01:00:07):

These were novel experiences a year ago. Nowadays, this is everybody's everyday life, is it not? So you guys, I am sure, are thinking about this all the time.

(01:00:15):

But in a sense it was what was already happening with our mobile phones. Because your predictive text keyboard is doing the notorious autocorrect and saying, "Yeah, well, you typed some stuff, but I think you really want this word." Like, "No! I do not want that word."

(01:00:28):

So it is interesting being inside this probabilistic universe, where every engineer using these things knows exactly why it is making the predictions it makes. It is because it is predicting the lowest entropy thing that you might do next, which is also precisely the least original thing that you might do next.
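
(To make that concrete, here is a purely illustrative sketch in Python, not anything Alan actually built: a toy bigram model that always suggests the most probable, lowest-surprisal next word, which is exactly the least original continuation. The corpus and function names are made up for the example.)

```python
# Purely illustrative: a toy bigram predictor that always suggests the
# most probable (lowest-surprisal) next word, which is also, by
# construction, the least original continuation.
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in a small training corpus."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the single most likely next word, if this word has been seen."""
    candidates = model.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat chased the cat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> "cat", the most frequent, least surprising choice
```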

EW (01:00:48):

Why it will never spell my name right! Or if it does, it will someday spell my name "Elecia" spelled like "electricity," because that is the only way it is ever spelled correctly.

AB (01:00:58):

You are definitely on the wrong side of history here. Yeah, my daughter has got the name of the most famous female Blackwell in the world, which I intended as an honor to the great pioneering American doctor, Elizabeth Blackwell. But it is not great for my daughter, when people want to Google her name. But it does mean that predictive text keyboards know exactly how to spell it.

(01:01:25):

It is just a bit sad that she has decided she wants to be called "Liz," because <laugh> now I say, "Ooh, no, no, no. That is too much entropy for me. You are going to have to have the same name as the famous doctor."

EW (01:01:38):

One of the things you mentioned in your book is self-efficacy. Chris came across an article about how AI suggestions in radiology cause humans to perform worse, when they are supposed to augment them.

AB (01:01:54):

Yeah, really worrying. This is definitely something that I discuss in my graduate classes on human-centered AI. Because I think this is really dangerous, and we definitely need to think about designing for this. When we are thinking about the system boundary for our design project, we should draw the boundary so that it includes the users and the organization as part of what the engineer needs to be concerned with, which is of course what all good engineers have always done.

(01:02:23):

Once you think about how this AI system is going to be used in practice, what I am interested in is the joint performance of the human expert who is working with the AI, and then studying what happens when you make different changes to the impedance of the channel that they are communicating over.

(01:02:40):

A PhD student of mine did a fascinating piece of work exploring something that no one had ever looked at before, which is the question of the timing between when you say something to the computer and when it speaks back to you. This timing is a really important part of human conversation.

(01:02:59):

Conversation is very musical. Music researchers have shown that when people are having a nice conversation, their turn taking settles into a rhythm.

EW (01:03:08):

Hm-hmm.

AB (01:03:08):

One person says a sentence, then the other says a sentence. They say, "Hm-hmm," the way that you just <laugh> did. Conversation is a kind of music. So my student, who was advised in this work by the Director of the Center for Music and Science, said, "I wonder if any of that is going to happen when you interact with computers."

(01:03:25):

So she created a simple conversational AI system, in which a human expert has to respond to judgments by an AI that may or may not be right, just as in the radiology example that you sent. And she just made subtle manipulations to the speed with which the response would come back.

(01:03:44):

In some of the conditions, it would mimic what humans do. So if you delayed a bit, it would delay a bit before it responded. Which is very different to the usual approach to user interface engineering, where the goal is usually just to make it respond as fast as possible. Like, "There is no speed that is too fast. As fast as possible."
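
(As a purely illustrative sketch of that idea, not the student's actual system: a responder that estimates how quickly the user tends to reply and mirrors that delay before answering, instead of answering as fast as possible. The class name, smoothing factor, and example delay are assumptions made up for the example.)

```python
# Purely illustrative, not the study's actual implementation: a responder
# that tracks how quickly the user tends to reply, and waits about that
# long before answering, rather than answering as fast as possible.
import time

class EntrainedResponder:
    def __init__(self, default_delay_s=1.0, smoothing=0.5):
        self.expected_delay = default_delay_s  # running estimate of the user's pace
        self.smoothing = smoothing             # how quickly the estimate adapts

    def observe_user_delay(self, user_delay_s):
        """Update the pace estimate from how long the user took to reply."""
        self.expected_delay = (self.smoothing * user_delay_s
                               + (1.0 - self.smoothing) * self.expected_delay)

    def respond(self, reply_text):
        """Wait roughly as long as the user usually does, then reply."""
        time.sleep(self.expected_delay)
        return reply_text

responder = EntrainedResponder()
responder.observe_user_delay(1.8)   # the user took 1.8 seconds to answer last time
print(responder.respond("I agree with that reading."))
```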

(01:04:02):

But of course in human conversation, that is not true. You do not want me to respond as fast as possible. In fact, we call that "jumping down your throat," if you respond too quickly in a conversation.

(01:04:12):

So she made different versions of her collaborative human expert plus AI system, and just changed the conversation speed. She found something really disturbing, which is that if the computer responded in a more humanlike way, so that you really felt like you were having a nice back and forth, people were more likely to agree with incorrect judgments by the AI.

EW (01:04:39):

Hm. More trustworthy.

AB (01:04:40):

Exactly. So as it became more humanlike, they said, "Oh. Yeah, that must be right," and they accepted incorrect recommendations from the AI more often.

EW (01:04:50):

Well, sure, because the computer thought about it.

AB (01:04:56):

<laugh>

EW (01:04:56):

<laugh>

AB (01:04:56):

Yeah. Clever. We have published that work, and I do not think any of the peer reviewers ever suggested your explanation. But yeah, now I am going to have to mention this when I tell students. You may be right about that.

CW (01:05:08):

Well, that is funny, because it is like when I am playing chess. If I have got the level turned way up and the computer is sitting there spinning, I know I am in deep, deep trouble, right? Because it is thinking hard. It is like, "Oh no, it is looking 600 moves ahead. I am doomed." It is a similar thing, right? <laugh>

AB (01:05:23):

And I do not know this for sure, but I know a lot of people suspect that when you interact with the large language models, the response speed is slower than the rate at which the tokens are actually coming out of the server, and that they do that to make it emulate a human more. I do not know if that is true or not, but certainly a lot of people believe it is.

EW (01:05:43):

Feels true.

AB (01:05:44):

It does.

EW (01:05:45):

Probably is. Who knows?

AB (01:05:46):

Yeah. Coming back to the question, designing systems in a way that undermines human experts, or even worse, subconsciously encourages them to make incorrect judgments by not recognizing the limits of the algorithm you are using, I think that is super dangerous. I fear that it is going to be a big problem for us, including of course with people who routinely just use ChatGPT, and do not think too hard about what it is saying back to them.

EW (01:06:16):

Why are we trying to create human intelligences with an AGI, instead of something more interesting like octopus intelligence?

CW (01:06:23):

<laugh> Because we do not understand octopus intelligence.

EW (01:06:25):

<laugh>

CW (01:06:25):

Or human. But we really do not understand octopus.

EW (01:06:28):

But at least we know we do not understand it.

AB (01:06:31):

Yeah. Well, that is definitely one of the problems with the phrase "AGI": you are talking about some kind of intelligence that supersedes every species on earth and transcends the notion of the body. So yeah, we are already in deep philosophical water there.

(01:06:44):

You could say, in a sense, that engineers of algorithmic systems are designing new intelligences every day. We are just not very grandiose about it.

(01:06:54):

AI researchers have always loved to talk about the thermostat. Because a thermostat acts on the world, it has got sensors, and it has got internal state. It meets all the classic definitions of what an AI is supposed to be. It is just that once we get used to it, we prefer not to call it AI, because we have moved on to more interesting ideas. So yeah, octopus intelligence, not so sure that I need an octopus in my house, but thermostat intelligence, yeah, more useful.
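
(As a purely illustrative sketch of that classic definition, not code from the conversation: a thermostat that senses the world, keeps internal state, and acts on it. The setpoint and hysteresis values are made up for the example.)

```python
# Purely illustrative: a thermostat as a minimal "intelligent" agent.
# It senses the world (measured temperature), keeps internal state
# (setpoint, dead band, heater command), and acts on the world.
class Thermostat:
    def __init__(self, setpoint_c=20.0, hysteresis_c=0.5):
        self.setpoint = setpoint_c      # internal state: desired temperature
        self.hysteresis = hysteresis_c  # internal state: dead band to avoid rapid cycling
        self.heating = False            # internal state: current actuator command

    def step(self, measured_c):
        """Sense the room temperature and decide whether to drive the heater."""
        if measured_c < self.setpoint - self.hysteresis:
            self.heating = True         # act on the world: turn the heater on
        elif measured_c > self.setpoint + self.hysteresis:
            self.heating = False        # act on the world: turn the heater off
        return self.heating

stat = Thermostat(setpoint_c=20.0)
for temp in (18.9, 19.6, 20.7, 20.2):
    print(temp, "->", "heat" if stat.step(temp) else "idle")
```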

EW (01:07:25):

One of the things I did like about your book was discussing what intelligence is, and how that word really is more of a cultural artifact than any measurable thing.

AB (01:07:39):

This is based on work by my colleague, the philosopher Stephen Cave, who just went back to look at how we started using the word "intelligence," in the way that is very familiar to us in the 21st century. He went back to projects that were trying to scientifically measure what made some people better than other people. It used to be called "anthropometrics."

EW (01:08:00):

Oorsh! Oorsh!

AB (01:08:00):

Through the 18th century, the 19th century, it became increasingly popular, because there were people feeling guilty about slavery. There were people feeling guilty about colonialism and racism.

EW (01:08:15):

It is not the way to solve that.

AB (01:08:16):

Well, it was the scientific age, and they said, "Well, it is obvious that these people are inferior to us. It is just that we never tested that scientifically."

CW (01:08:26):

Arrh!

EW (01:08:26):

It is just infuriating!

AB (01:08:30):

This was the philosophy of Nazi Germany, and before the tragedies there, it used to be the science of eugenics. There was a Journal of Eugenics. It was a respectable scientific discipline, and it was the discipline of improving the human race by measuring which people are superior.

CW (01:08:50):

Using phrenology!

AB (01:08:51):

Well, I am looking at a phrenology head right now, because I have one in my office, just to remind people how stupid this is. But all of the stuff about intelligence testing was part of that project. Intelligence testing was fundamentally a racist project.

(01:09:05):

Stephen, in his history of the use of the word, shows this super convincingly. That the word "intelligence," as a thing that you could measure and was considered to be something scientific, rather than just a thing that philosophers talked about, was totally associated with eugenics, with anthropometrics, and motivated by racism.

(01:09:25):

So I actually say, "If intelligence was only ever about racism, then does that mean that artificial intelligence is artificial racism?" There are people who definitely claim that. Ruha Benjamin at Princeton writes amazing books just looking at the fundamentally racist design of a lot of stuff that is around us, and we do not think to ask about.

(01:09:49):

But I do not see you guys' faces. I am guessing you may be white; certainly your surname is White, so there we go. But I am an old white guy, and I have a pretty easy life. It gives me the illusion, because I am an old white professor at Cambridge, that everything that comes out of my mouth must be- It does not matter where those words go. Obviously they are super intelligent. You can put them in a book. You can put them in a machine. It is nothing to do with my body. They would be intelligent from anybody.

(01:10:22):

But I know that I have got colleagues who are black, and I have got colleagues who are younger, and I work a lot with computer scientists on the African continent. Those people, they can say exactly the same words I do, and it does not sound intelligent to other people, because-

(01:10:38):

I have got the luxury of pretending that my words would be received as the same, no matter what body they came out of. But a person who has got a black body, or who lives in the wrong place, knows very well that that is not true. Intelligence is not a disembodied thing. Intelligence is actually pretty much embodied. It is only the people who have got the right kind of bodies that can pretend otherwise.

CW (01:11:00):

Okay, so-

EW (01:11:02):

Octopus bodies.

CW (01:11:02):

<laugh> To wrap this up, and at the risk of asking you to be my therapist-

EW (01:11:07):

<laugh>

CW (01:11:07):

I have a great deal of despair about all of the things happening in technology, but particularly where LLMs are going. Not the technology necessarily, but the way they are being used, abused and marketed.

(01:11:21):

I have a lot of friends who occasionally- They know I am a musician and will occasionally send me, "Hey, there is this new website where you can generate a song," and they generate a song and send it to me. And they say, "What do you think of all of this?"

EW (01:11:32):

And then he is infuriated for the rest of the day.

CW (01:11:33):

I try not to respond with the Miyazaki quote, "I find this to be an insult to life itself."

EW (01:11:37):

<laugh>

CW (01:11:37):

How should I be engaging as an experienced technologist and engineer with this stuff, instead of disengaging and going and sitting and just playing drums and not getting paid anymore?

AB (01:11:52):

Cool. Well, I am so pleased to hear that you are a drummer, because that is just what my band needs at the moment.

EW (01:11:55):

<laugh>

CW (01:11:55):

<laugh>

AB (01:11:55):

This is not a serious "I am solving the problems of the world." But this is like, "Here is a fun game that I am playing, because I am a professor in Cambridge and I can." With a few good friends who are also a little bit suspicious about traditional definitions of knowledge, as being sort of inside old white men's bodies.

CW (01:12:17):

Hm-hmm.

AB (01:12:17):

And so they talk about feminist theorists like Donna Haraway or Karen Barad. Those feminists were already pretty suspicious about all the text in the world that is written by old white guys, which women sort of have to go along with believing. And we are at an interesting point now where text has eaten itself.

CW (01:12:42):

<laugh>

AB (01:12:42):

We have created these things which literally produce bullshit, as I said in a blog entry last year. I have now got some hard scientific evidence for it, which I will be publishing soon. Until pretty recently, universities in this country and around the world thought that as long as professors were producing texts, they were doing a good thing for the world.

(01:13:10):

But it seems clear now that there is too much text, and that producing more of it is not necessarily good, because we can now just do that with LLMs. So if there is somebody out there that needs bullshit, well, that is good. They can just sit and chat to an LLM, and maybe I should do better things with my life.

(01:13:28):

So with my feminist theorist friends, I say, "What would the post-text academy look like?" Because we are always going to have young people, and we are always going to have universities. Learning is part of being human. But if it is not about teaching them to write essays, and if we are not being evaluated on our research outputs and academic papers, what will we do with our time?

(01:13:48):

So we have been trying to invent a kind of knowledge which is as far away from text, and as far away from symbolization, as we could possibly make it. So we formed a doom metal band.

CW (01:14:00):

<laugh>

AB (01:14:00):

We play so loud that your body shakes, and no one is going to turn ChatGPT up that loud. We use a lot of distortion pedals. My friend, the singer, who is a Professor of Creativity, she trained in Mongolia in overtone singing. So the noises she makes are not regular notes. You cannot really write them down as a piece of music.

(01:14:22):

We are having a lot of fun. <laugh> We have played our first gig and it was too popular. So we have had to say, "We are only going to play underground from now on, because we do not want to do this in front of a big audience."

EW (01:14:34):

<laugh>

AB (01:14:34):

But definitely this is seriously challenging. What is the value of an LLM? How much time are we prepared to spend having a conversation with a thing that does not have a body?

EW (01:14:47):

I am fine with it not having a body.

CW (01:14:48):

You reckon?

EW (01:14:50):

Just want it to have more than a pastiche of random stochastic parrotry.

AB (01:14:58):

Yeah. I feel like we have gone back to which science fiction books we liked the best.

EW (01:15:03):

Yes.

AB (01:15:03):

Imagining having a different body is an interesting thing. But coming to terms with the fact that you are in a body- I guess I am a good many years older than you, Elecia. I am getting towards the point in my life where this body is going to give out some time.

(01:15:19):

I can either be like Elon Musk and say, "When I am immortal, I am going to go and live on Mars." Or else I can say, "Actually, this is what it means to be human. To be human means to be born. It means to die. And it means to be youthful, to be old, to do all those other things."

(01:15:36):

So quite a lot of the book actually says, "Just think about what it means to be human, and is AI helping you with that?" Or maybe moral codes, computers that will do what you want and help you achieve what you want, maybe that is what being human is about.

EW (01:15:53):

Alan, thank you so much for this wide ranging conversation. It seems a little redundant, but it is tradition. Do you have any thoughts you would like to leave us with?

AB (01:16:03):

Yeah. I think the reason why we need less AI and better programming languages is that having control over the digital realm allows us to become more properly human. A lot of the things being sold to us, like AI, make us less human and give us worse lives.

EW (01:16:23):

Our guest has been Alan Blackwell, author of "Moral Codes: Designing Alternatives to AI," and Professor of Interdisciplinary Design in the Cambridge University Department of Computer Science and Technology.

CW (01:16:36):

Thanks, Alan. This was really great.

AB (01:16:38):

Thank you very much. It has been a pleasure.

EW (01:16:40):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for their questions and answers to my polls. And of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm, where there will be show notes.

(01:16:58):

And now a quote to leave you with, from the very beginning of "Moral Codes": "There are two ways to win the Turing Test. The hard way is to build computers that are more and more intelligent, until we cannot tell them apart from humans. The easy way is to make humans more and more stupid, until we cannot tell them apart from computers. The purpose of this book is to help us avoid that second path."