514: Just Turn Off All the Computers

Transcript from 514: Just Turn Off All the Computers with Philip Koopman, Christopher White, and Elecia White.

EW (00:00:06):

Hello and welcome to Embedded. I am Elecia White, here with Christopher White. Our guest this week, a returning guest, is Philip Koopman. We are going to talk about his new book, and well, if not robotaxis, probably the singularity.

CW (00:00:25):

Great <laugh>. Hi Philip. Welcome back.

PK (00:00:28):

Hi. Thanks. It is great to be back.

EW (00:00:30):

Could you tell us about yourself as if we met, I do not know, at an Embedded World dinner.

PK (00:00:39):

Okay. Well, to try and keep it short, I have been doing embedded systems since around 1980, give or take, depending on how you want to start counting. I have been doing self-driving car safety since the mid nineties.

(00:00:52):

I have done, oh, I stopped counting at 200 design reviews of various embedded systems. Automotive. And natural gas pipelines. And big scary power supplies. And little tiny, not so scary, power supplies. And a little aviation. A little medical. I have seen all sorts of stuff.

(00:01:11):

So lots of embedded stuff, just plain old embedded. But I have also gotten into the self-driving car safety thing a lot more heavily of late. Because they have been sort of- A hundred billion dollars gets you a lot of progress. So I have been on top of that lately.

(00:01:27):

But I have always been an embedded person at heart. Now I am sort of pivoting out. I retired from Carnegie Mellon University a few months ago, where I had been teaching embedded systems. I am pivoting out to be a little bit broader, more generally embedded, and not just 24/7 self-driving cars.

EW (00:01:47):

And yet you did recently write a book?

PK (00:01:50):

Well, it is on embodied AI safety, which is more than self-driving cars. But I suspect this is the main topic of conversation today.

EW (00:01:57):

Yeah. We will talk more about it. Are you ready for lightning round?

PK (00:02:00):

Sure. Why not?

CW (00:02:03):

Would you use a robotaxi? And should it cost more or less than a human-driven taxi?

PK (00:02:08):

I have been in them. It depends whose robotaxi.

CW (00:02:10):

<laugh>

PK (00:02:15):

I think ultimately it should cost whatever people are willing to pay. That is the market forces. I do not get to control that.

EW (00:02:24):

Do you think the singularity will happen in your lifetime?

PK (00:02:29):

I have no opinion on that. But remember the part where I said, "I just retired"?

CW (00:02:33):

<laugh>

PK (00:02:33):

<laugh> So time is running out on that one. So I am guessing, "No."

CW (00:02:39):

Do you think fully autonomous vehicles will happen in your lifetime? Without me defining what "fully autonomous" <laugh> means.

EW (00:02:45):

Or what "lifetime" really means.

PK (00:02:46):

I am going to say, "No." But the nuance here matters. Because even the Waymo robotaxis have remote support and assistance.

CW (00:02:53):

Mm-hmm.

EW (00:02:56):

What course at Carnegie Mellon University was your favorite to teach?

PK (00:03:01):

Well, I was teaching embedded system engineering. It was, by the time- Every year it changes a little bit. By the time the dust had settled, it was half embedded software quality, and one quarter embedded system safety, and one quarter embedded system security. I was delighted to teach that course for many years.

CW (00:03:23):

Do you prefer AI or ML?

PK (00:03:27):

Depends what you mean by AI. <laugh>

CW (00:03:32):

<laugh>

PK (00:03:32):

ML is the current new hotness for AI. AI-

CW (00:03:36):

It is? It was the new hotness six years ago, and it is back. <laugh>

PK (00:03:40):

Well, and now it is back with transformers. Oh, who knows? Right. I took an AI course, including completing a course project in AI, in 1981. I will just stop there.

CW (00:03:53):

Mm-hmm. What were those called? Expert systems.

PK (00:03:55):

Oh, that was before expert systems. Expert systems were the new hotness at that point.

CW (00:03:58):

Okay.

PK (00:03:58):

<laugh>

EW (00:04:01):

Expert systems were the new hotness when we went to school.

CW (00:04:03):

Right, right, right.

EW (00:04:05):

That is what I did my project on.

CW (00:04:05):

Right.

PK (00:04:06):

I remember back then building a couple neurons in the neural net. And I remember building an expert system resolver that went way faster than anything else. As the guy I did it for said, "It asks questions really fast." <laugh>

EW (00:04:26):

Do you have a favorite fictional robot?

PK (00:04:29):

I guess my favorite is Huey, Dewey and Louie, although we have to mourn for poor Louie, because he did not make it. I guess you have to be at a certain age to get that one. That is "Silent Running."

CW (00:04:42):

Oh. Right, right. I have not seen that movie in a very long time. I should add it back to my list.

PK (00:04:46):

There you go. I have three Roombas. They are named "Huey, Dewey and Louie," for that reason. <laugh> Not everyone would get why they are named that.

CW (00:04:57):

Do you have a tip everyone should know?

PK (00:05:00):

Yes! I have two tips everyone should know. The first one is a callback, "Do not run with scissors."

EW (00:05:04):

Mm-hmm, mm-hmm, mm-hmm.

CW (00:05:04):

<laugh>

PK (00:05:04):

Some of your listeners will know why I said that, but I am not telling.

(00:05:11):

The other more serious one is, "Checklists are just amazing, for the right thing." So I travel a lot, and I have a checklist that I use every single trip. Most trips it saves me from forgetting something. It is, "Yeah, you can buy it at the far end, but we only have eight hours on the ground. Spending an hour to track down something is a real pain in the neck."

EW (00:05:33):

It is funny, because there has been so much in medical and aviation about how checklists are critical. Whenever I use them I am like, "Yes, this is very useful," even though I feel kind of silly, because it is all stuff I should know to do.

CW (00:05:51):

Look, it is really hard to find a drum stool at 11:00 PM, when downbeat is 11:15.

PK (00:05:58):

<laugh>

EW (00:05:58):

<laugh>

PK (00:06:01):

I will give you my checklist story, because it is a worthwhile story. I used to drive submarines for a living. I have a combat medal from the Cold War, but I cannot tell you why. Okay. Part of on a submarine is it is-

CW (00:06:13):

<laugh> Giant squid. I will just assume it was a giant squid encounter.

PK (00:06:15):

No, nothing to do with giant squids. The thing about a submarine is it is a large tank. Basically there are people inside this large tank of air. The saying is, "You want to keep water out of the people tank." You really do not want that water coming in.

(00:06:29):

So before- Every time you leave port, before you dive the first time, two people, not one but two people, go around and check the position of valves and openings and stuff. There are literally hundreds of them. If even one of them is off, everyone could die.

(00:06:45):

So you have this checklist, this laminated thing. You go in with a grease pencil and you check off every single thing. One person does it, then a second person does it, because it would be basically impossible to keep track of all of them. And the new guy, the new guy gets the hardest one.

(00:07:02):

So I did the bow compartment I do not know how many times. That checklist, it was like four hours of, "This valve is open. This valve is closed." Just crazy. So I learned the value of checklists there.

(00:07:11):

But it is, for the right thing, it does not replace thinking. But the way I look at it is, if you are really, really smart, why would you want to waste brainpower in remembering to pack your underwear?

CW (00:07:22):

<laugh>

PK (00:07:22):

<laugh> Why should you worry when you are at the airport, "What did I forget?" The answer is, "I know I completed my checklist. I know that I am careful about completing my checklist. So I am simply going to choose not to worry about missing stuff." It is not perfect, but my "forgot something on a trip" went from 20 or 30%, down to 1 or 2%.

EW (00:07:46):

Having just had a funny conversation with someone who forgot charging cables, that is exactly the sort of thing most people forget.

PK (00:07:57):

Mm-hmm. I have another trick, which is that I have two sets of toiletries. And I have a backpack that is always packed with- My travel charger stays in the bag. It never comes out. That also helps. But you have to understand that, I do not know, I was past a hundred thousand miles this year, in summer. So if you travel that much, it is worth it.

CW (00:08:17):

<laugh> Do you not have to live on the airplane, to get that far?

PK (00:08:20):

Pre-pandemic, when I was doing the networking to make UL 4600, which is a self-driving car safety standard, happen, I figured out that I had been going about 25 miles per hour for the entire year, 24/7.

CW (00:08:35):

<laugh>

PK (00:08:35):

<laugh> I was like, "I do not think I ever want to do that again."

EW (00:08:41):

Are you traveling for speaking engagements, for training, for fun?

PK (00:08:47):

This is mostly speaking engagements. There are a bunch of things I had been saying no to for years, because first of all, there was no travel during the pandemic. But second of all, because there are just too many events.

(00:08:57):

Now that I am soft retired from the university, not- I still do plenty of consulting and all that other stuff, which is fine. But it frees up a little more time. I do not have to be back in Pittsburgh every week to teach. All these kind of things. A lot of these are things I have never been able to go to in the past, and there was pent-up demand.

(00:09:17):

It is not clear to me whether the next year will be as crazy or not. But probably not. Not as crazy as this year. But that is okay. Because I get to meet new people I have not seen before. Go to events I have not been to.

(00:09:28):

In September and October, I spent five weeks in the EU, on four separate trips. Because at some point, spending a week in a hotel room in Munich just was not what I wanted to do, so I flew home for the week.

EW (00:09:44):

And yet I could totally spend a week in Munich.

PK (00:09:47):

Well. Yeah, but I have lost count of how many times I have visited. So I have sort of seen everything <laugh> that I need to see. Yep.

EW (00:09:56):

Okay. Let us talk about your new book. When we spoke last, a little over a year ago, we talked about "Understanding Checksums and CRCs," and how the noise in your environment should affect how you choose to protect your data. There was a lot more than that. I am sorry, but-

PK (00:10:18):

That is okay. Yeah, there was a CRC and checksum book, which had nothing to do with anything, other than that was a hobby project for multiple- From my whole time at CMU. I wrote it up in a book, and now I can move on. That was that book.

EW (00:10:31):

The previous one to that is "How Safe Is Safe Enough?: Measuring and Predicting Autonomous Vehicle Safety."

PK (00:10:37):

Correct.

EW (00:10:38):

Who was the audience for that one?

PK (00:10:40):

The audience for that one was regulators, and engineers that work at the self-driving car companies. There were some folks out of that group who read it.

(00:10:54):

There was a state representative in Washington who read it. There are some parts in there that say, "Skip this because it is really deep, if you do not want to go that deep." I presume she skipped those and that is fine, but she got a lot out of it.

(00:11:10):

There are lots of folks who read it. Some news folks, reporters, technical reporters. But it did not hold back from being pretty technical.

(00:11:19):

In this new book- The new book is "Embodied AI Safety." It is kind of for a similar audience, but I tried a little harder to make it more accessible, especially the front half dozen chapters. To make it more accessible to more non-technical folks. Hopefully I succeeded.

(00:11:39):

But it is an audience- It is not- There is no math. One of the things I have done, even in the CRC book, is I do not use an equation editor. Because as soon as you do that, it dramatically limits who can access the book. So "Can you compute averages?" is about the extent of the math.

(00:11:58):

It is designed to help anyone who- The new book- The old book a little bit, but the new book especially is, anyone who can understand technical concepts, and who is motivated to really understand what safety means when there is AI involved. They should be able to read most of the book, if not all of it, and really get a lot out of it. That is the goal.

EW (00:12:22):

Would you say it is a popsci book?

PK (00:12:25):

No, I would not. I do not think so. It is not ever going to be sold on news stands. If there is a market for it, that is great, but I am not holding my breath on that. In particular, I have read some AI books lately that are more like a popsci book, whatever that means, but more general audience.

(00:12:45):

The "Embodied AI Safety" book requires technical sophistication, and it requires a lot of thinking. It is not a beach read. It is not a casual read. Where popsci often is more of a casual read.

(00:13:02):

I think anyone who is smart and wants to learn the technology, and can understand how things work in an engineering kind of way, has an engineering kind of mindset, even if they have not been trained as an engineer, should be able to read it. But it is not intended to have that broad appeal.

EW (00:13:21):

So you have gone from one that was more engineering, and then we have gone to one who is-

PK (00:13:25):

Ultra geeky. <laugh> This year's e-book is ultra geeky. I am not going to be bashful about that one. Yeah.

EW (00:13:33):

Oh. Yes. No, we are not going to argue with that one. And then this one is a broader audience, with insurers and journalists and-

PK (00:13:38):

Yeah, maybe 20% broader. Not way broader, but a bit broader. Because it turned out insurers were reading the first book too. So I basically found out who was reading the first book, and tried to tailor the book to hit all those folks. Make it a little more accessible to them.

EW (00:13:56):

Is your next one going to be popsci?

PK (00:13:58):

I have no idea. I am taking a little bit of a break from writing books for now. This is enough books in a short period of time for now.

EW (00:14:07):

It has been a lot.

PK (00:14:08):

And I have been traveling a lot, which makes it hard to write while you are traveling.

EW (00:14:12):

When I wrote my "Embedded Systems" book, there were two people who had worked for me, who were my designated audience members. Not that they read it, but I assumed that if they did not have that information, I needed to explain it.

PK (00:14:29):

Mm-hmm.

EW (00:14:30):

And if they did already have that explanation, like they knew what an average was, I did not have to explain it.

PK (00:14:36):

Yeah.

EW (00:14:36):

Did you have audience members in mind, that you used to define what you needed to explain and what you did not?

PK (00:14:44):

I did, and it is going to be a surprise, because I am going to name the person and she does not know it. There is a reporter, Junko Yoshida. Have you ever run across her?

EW (00:14:53):

I have not.

PK (00:14:55):

I met her a long time ago. She started getting involved in self-driving car safety back in the pre self-driving car era, during the Toyota unintended acceleration cases. She was an editor at EE Times, which you probably remember back when that was a trade rag. Right? She was at EE Times, and she was covering the Toyota unintended acceleration cases. And I met her as a result of that.

(00:15:23):

She and I have stayed in touch over the years. She has been covering all sorts of technology things. Chips and AI machine learning. I have had a lot of discussions with her about this technology. One of the things that occurred to me to do was I should ask her what she thinks she needs to know, what she needs to understand.

(00:15:46):

So I pitched the idea of the book to her and she said, "I want to read that book." And I said, "Aha! There is my audience." Because she is not a person working at a robotaxi company. I need to have broader impact than that. But her skill set and her ability to understand things are pretty representative of the technically sophisticated, but not trained as an engineer, set of folks.

(00:16:12):

I have also put together in my mind, a composite of regulators and legislators and policy folks and other journalists I have talked to. So it was written to make sure it was accessible to them.

(00:16:23):

That having been said, engineers are going to get a lot out of it too, because I also teach. The other composite I had in my mind was the students who take my class. What would they find accessible? What would they find worthwhile? So that was sort of the composite audience I had in mind.

EW (00:16:42):

Having read most of it- Let us be realistic about my schedule lately. A lot of it I nodded through, "Yes, yes, yes, yes, yes." But it was information that I did not have a previous place I could point to and say, "Look. Expert in the field says to do this. Let us do this." So I can steal your credibility as my own.

PK (00:17:09):

Well, for someone like you, that is no surprise. And that is great. Part of the challenge is boiling things down to be a simple, concise explanation, that is close enough to not be misleading.

EW (00:17:24):

Oh, yes.

PK (00:17:24):

All right? That is always really hard. The more experienced you are, the better you get at that. So part of this was an exercise. There are four chapters that go through safety and security and machine learning and human factors.

(00:17:37):

Then there is sort of like a little- I did not make it a primary chapter, but there is a mini chapter on how tort negligence works. It is important to have those references to point to, if you already know them. But what I have also found out is a very large fraction of the folks working in this field are missing one of those pieces, especially the tort negligence.

(00:18:01):

Your experience of, "Hey. We are going to- Yeah, yeah. Nod, nod, nod. Great reference to look to," is great. But there are other folks who will nod, nod, nod and say, "I had no idea." So some of it is just a self audit to see where the holes are, and where the gaps are, because that is important to know.

(00:18:21):

I have a handful of favorite quotes and some of them are pretty eclectic, but one is "A man's got to know his limitations." Clint Eastwood. But it is important. You have to know your limitations. Part of the purpose of this book is for folks who are really doing it for a living, to make sure they do not have any blind spots.

EW (00:18:42):

That was super useful. Some of the terms that I had picked up in engineering meetings-

PK (00:18:50):

Mm-hmm.

EW (00:18:50):

But had not quite followed all the way through. Well, okay, let us start with hazard assessment and risk mitigation. This was a process I learned from FAA guidelines.

PK (00:19:03):

Yeah. HARA. Hazard analysis and risk assessment. That is actually an automotive-specific term, but the idea is very general. I happen to use the automotive term, but it is certainly a very general idea.

EW (00:19:15):

It is a matter of trying to figure out everything that can go wrong, and then trying to figure out how to make it not go wrong.

CW (00:19:22):

Or, how to deal with it if it does go wrong, in a safe manner.

PK (00:19:26):

It is identify all the things that can go wrong. Decide how likely it is. Decide how big a loss it creates. And then determine the risk based on some combination of those factors.

(00:19:40):

Then after that- What comes after hazard risk analysis is, well, you have to do risk mitigation. Typically, the higher the risk, the more engineering rigor and effort you put into reducing the risk to something acceptable.
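A minimal sketch of that severity-times-likelihood combination, in C. The scales, names, and the example hazard are illustrative only; real standards use lookup tables and extra dimensions rather than a simple product.

```c
#include <stdio.h>

/* Illustrative severity and likelihood scales, not taken from any standard. */
typedef enum { SEV_MINOR = 1, SEV_SERIOUS = 2, SEV_FATAL = 3 } severity_t;
typedef enum { LIKE_RARE = 1, LIKE_OCCASIONAL = 2, LIKE_FREQUENT = 3 } likelihood_t;

/* Combine severity and likelihood into a coarse risk score.
 * The higher the score, the more engineering rigor goes into mitigation. */
static int risk_score(severity_t sev, likelihood_t like)
{
    return (int)sev * (int)like;   /* toy combination; standards use tables */
}

int main(void)
{
    /* Example hazard: brakes apply at full force while the vehicle is moving. */
    int score = risk_score(SEV_FATAL, LIKE_RARE);
    printf("risk score = %d out of 9\n", score);
    return 0;
}
```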

EW (00:19:55):

This is how we end up with dual core processors, that are supposed to run the same code and get the same answer.

PK (00:20:02):

If there is a hazard of a single event upset or some sort of defect that will affect one core but not the other, you run two cores. And if they mismatch, you know something bad happened. Exactly.

(00:20:12):

But it is not that safety means having two computers. Rather, safety is, "Well, we have to identify the hazards." For something life critical, you have to worry about a runtime fault. So you end up putting in two computers and comparing, as the mitigation technique.
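A rough sketch, in C, of the two-computers-and-compare mitigation described above. The structure and function names are invented for illustration; real lockstep designs typically do this comparison in hardware rather than in application code.

```c
#include <stdint.h>
#include <stdio.h>

/* Output computed independently on each of two cores (illustrative). */
typedef struct {
    uint16_t motor_command;
} channel_output_t;

/* Stubs standing in for the real actuator and fail-safe paths. */
static void set_motor(uint16_t cmd)  { printf("motor = %u\n", (unsigned)cmd); }
static void enter_safe_state(void)   { printf("mismatch: outputs de-energized\n"); }

/* Only act when both channels agree; any mismatch means something bad
 * happened on one core, so command the safe state instead of guessing. */
static void actuate(channel_output_t a, channel_output_t b)
{
    if (a.motor_command == b.motor_command) {
        set_motor(a.motor_command);
    } else {
        enter_safe_state();
    }
}

int main(void)
{
    channel_output_t core_a = { .motor_command = 120 };
    channel_output_t core_b = { .motor_command = 120 };
    actuate(core_a, core_b);        /* channels agree: drive the motor */

    core_b.motor_command = 7;       /* simulated single-core upset */
    actuate(core_a, core_b);        /* channels disagree: go to safe state */
    return 0;
}
```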

EW (00:20:31):

When we start hazard assessments, my teams have almost always started- Actually you listed them as well. When a bolt breaks, what happens when a bolt breaks? It is like, "Okay. This is a failure we understand." Even when a semiconductor fails, or the code spontaneously turns on all of the motors at full power.

(00:20:58):

All of those things I have definitely said in the meetings, and tried to figure out everything that could go wrong with my software. And how to make it not happen, or how to let it happen in a way that is safe.

PK (00:21:15):

Which means you were doing safety engineering.

EW (00:21:18):

Yes.

PK (00:21:18):

So one of the takeaways from the book is that people think that safety means code correctness, and that is not true.

EW (00:21:25):

Code is never correct.

PK (00:21:27):

Well, if you have incorrect code or crazily bad code, it is hard to make it safe. But you could have absolutely defect free code, and have a set of requirements that are outright dangerous.

CW (00:21:40):

Mm-hmm.

PK (00:21:42):

I run into this all the time. Part of the message is, having perfect code does not make you safe. Because getting rid of all the bugs, does not mean you thought of all the hazards and how to mitigate them. Those are two different things.

EW (00:21:56):

Yes. Yes. Excuse me while I write this down. I have to go talk to a client and those are the words I need to use.

PK (00:22:05):

Well, that is the point of this book. The point of this book is to give you the words. For someone sophisticated like you, the point of the book is to give you the words. That is right.

CW (00:22:13):

Hard to think of an example of that, even though I believe it, and I know it is true. Coming up with a blank.

EW (00:22:18):

Oh! Well, if your requirements are to make something dangerous.

CW (00:22:20):

Well, sure. Works as designed. It blew up the world <laugh>. Yeah.

EW (00:22:28):

The one wheel scooters.

CW (00:22:31):

Or even the two wheeled one. Yes.

EW (00:22:33):

But the one wheels, they all got recalled at some point.

CW (00:22:35):

Yes. Yes.

PK (00:22:35):

Yeah.

EW (00:22:36):

It is because there is no way to make that safe.

CW (00:22:42):

Clearly, nobody ever read B.C. comics before making that product.

PK (00:22:45):

<laugh>

EW (00:22:47):

So your software could be perfect, but nobody put in the, "Wow, you really cannot do that," notice, when they thought about the risk.

CW (00:22:56):

"This product should not exist," is the ultimate limit of that.

EW (00:22:58):

<laugh> Yes.

PK (00:23:00):

Less esoterically, there are the two-wheel scooters; one of them had a problem where the brakes would come on and throw riders. It was something- I do not remember the details, so I am going to hypothesize something. Say there is some sort of defect in the payment system, and it decides you have not paid, and it jams on the brake.

CW (00:23:21):

Oh, one of the rental scooters.

PK (00:23:24):

Rental scooters. It does not bother to check whether you are moving when it jams on the brake. Right? It is, "Oops." That was hurting people.

(00:23:31):

I do not remember the- This happened more than once. I do not remember whether any of the recalls were precisely about that mechanism. But it was definitely that the thing would slam to a stop while you are moving, for a reason that was not reasonable. That is-

EW (00:23:44):

As opposed to our car, which occasionally-

CW (00:23:46):

Our car which does that once a month.

EW (00:23:48):

Slams to a stop.

PK (00:23:48):

You could have a requirement to stop it when it has not been paid. And you have a requirement to stop it when the rider says to stop.

(00:23:54):

But if you are missing the requirement that you are not allowed to stop at speed more than so many Gs, if you are just missing that requirement, you are not going to test for it. You are not going to consider how it could happen. You are not going to be safe, even though all the other requirements are perfectly implemented.

CW (00:24:09):

Mm-hmm.
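To make the missing-requirement point concrete, here is a hedged sketch in C of what such a guard might look like. The threshold and names are invented; this is not from any real scooter firmware, just the kind of check that never gets written if the requirement is never stated.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_LOCKOUT_SPEED_KPH  3.0f   /* illustrative threshold, not a real spec */

/* Payment lockout is only allowed to apply the brake hard if the scooter
 * is essentially stopped; otherwise defer and warn the rider. */
static bool lockout_brake_allowed(float speed_kph)
{
    return speed_kph <= MAX_LOCKOUT_SPEED_KPH;
}

int main(void)
{
    float speed_kph = 18.0f;          /* rider is moving */
    if (lockout_brake_allowed(speed_kph)) {
        printf("apply brake: lockout engaged\n");
    } else {
        printf("defer lockout: warn rider, wait until stopped\n");
    }
    return 0;
}
```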

EW (00:24:13):

You could potentially not make this a software problem. You could make it so that you cannot put the brakes on hard enough to stop at speed.

PK (00:24:22):

That is the thing. The mitigation does not have to be software.

EW (00:24:24):

Right.

PK (00:24:26):

One of the important things is that there is no such thing as "software safety." That is not a thing. It is computer-based system safety. But people call it "software safety," because that is what they are used to hearing.

(00:24:36):

But in computer-based system safety- By the way, that computer is embedded in something, hence embedded system. And if the mitigation is the brakes can only exert a certain G-force and the rider will not get thrown, then you do not have to worry about any of that.

CW (00:24:52):

Yeah. Software is perfectly safe, as long as it is not-

EW (00:24:54):

Connected to anything?

CW (00:24:54):

Put into a system. <laugh>

PK (00:24:55):

As long as it does not have any actuators, for the most part. Or, where it is not trying to manipulate you into doing something bad. The whole AI thing is a whole 'nother level of this.

EW (00:25:07):

One of the phrases that I did not have, that I appreciated, was "common cause failures." I think I heard it at some point, but it did not click. Could you describe what those are?

PK (00:25:21):

Sure. A lot of safety has to do with no single point failures. I am going to get to common cause in a second. But no single point failure. If there is a particular component, that if it stops working, then someone dies. This is clearly a bad thing. Anything life critical, one of the bedrock criteria is there can be no single point failures. There have been loss events due to single point failures.

(00:25:44):

But then in the aviation industry in particular, they started finding big dangerous failures, that were not a single thing failed. It is that a bunch of things failed together, and there was some sort of causal mechanism.

(00:26:00):

One of the big examples is there was a huge DC-10 airplane crash. What they had found was they had three hydraulic lines to control the aircraft flight surfaces, but all three hydraulic lines ran together. And there was a cargo hatch that was not properly secured, and it opened, causing explosive decompression, which collapsed the floor.

(00:26:23):

The collapsing floor severed all three hydraulic lines together. Oops. There were a couple of plane crashes. In one of them, the pilot miraculously did okay. The other one was not so good. The common cause was that this one floor collapsing would compromise three otherwise independent systems.

(00:26:43):

Another one is if you have a cable harness and you have two independent signals, two independent wires, and they both go to the same connector and that connector detaches, you have lost both signals. Even though on paper they look independent, they go in through the same connector. And that connector is a common cause failure.

EW (00:27:02):

This made me re-evaluate a bunch of things I have been working on. Because I do not think about that. I think about, "This signal happened," not, "This cable got unplugged."

PK (00:27:17):

Part of this is we are saying software is not safe all on its own. Part of embedded systems is not embedded software. It is embedded systems for a reason. If you want to do embedded system safety, you have to have some basic knowledge about mechanical, about electrical.

(00:27:32):

Back when I went through school- Undergrad, barefoot in the snow, uphill both ways, all that stuff. <laugh> They made me take thermodynamics and all that other stuff. There is something to be said for that. Because I got pretty well grounded, with my engineering degree, in things that are not just computer software. And in embedded systems, this stuff starts mattering.

CW (00:27:56):

Yeah. That is why there are distinctions often, at least in larger companies, between firmware engineers, who are working on the software that runs on the device, and systems engineers, who have to have an overview of all of the parts, including mechanical, electrical, and software.

PK (00:28:12):

You do not have to go to school to get that degree to pick it up.

CW (00:28:14):

Right.

PK (00:28:14):

But the point is, you have to be aware that it is not just a bunch of software running on a magic box. There is real stuff going on in there.

(00:28:23):

There is another one that happened to me from a while back. A bunch of, I think it was, Dell desktop computers had a bad batch of capacitors. These were electrolytic capacitors that would at some point age out and explode, and plaster themselves on the other side of the box.

(00:28:40):

I had several of my computers- So I had students, I was running a research group. Several computers all went within a one month period; they all exploded their capacitors. The common cause was a bad batch of capacitors. Fortunately, that was not life critical.

(00:28:57):

But if those capacitors were in a life critical system, you said, "Well, we have two computers, and so if one fails, it is no problem. We have another." What if the exploding capacitor pastes itself on and shorts out something on the other computer, because they share an enclosure? Or what if both of them, during an extended mission, suffered the same failure, and both boards failed for that reason?

CW (00:29:19):

It can be hard. Because I remember the first time I encountered something like this, was at a medical device company. When I came in, they had a design for an emergency stop, which was- This is a laser, it is shooting at somebody's skin. If something goes horribly wrong, there is a big red button on the front and you press it.

PK (00:29:35):

Yeah. Red buttons are great. Red buttons are important.

CW (00:29:37):

Red buttons are great. But when I came there, the architecture was that the red button was a software button that went to an interrupt. So it was the software's job to shut down when somebody hit that.

PK (00:29:48):

Yeah. I am not buying it.

CW (00:29:49):

Obviously that went wrong, because the software gets busy or misses it or something. There were a lot of problems. I spent- I think it was me and the electrical engineer, spent six months saying, "Connect this to mains power and stop it." We did get that in the next revision.

PK (00:30:06):

Or put an ASIL-rated relay or some- Do something.

CW (00:30:10):

Obvious thing to do. But it took a lot to convince the people who were running the company, that was the way to go, after all this work had been put into the- There are people involved in these situations too, which makes it even more difficult sometimes.
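A sketch of the fragile pattern being described, where the red button is only a flag set from a GPIO interrupt and the main firmware has to notice it. All names are invented for illustration; the point is that anything that delays the loop delays the stop, which is why cutting power in hardware is the safer mitigation.

```c
#include <stdbool.h>
#include <stdio.h>

/* The emergency stop is just a flag set by an interrupt handler;
 * the control loop is responsible for acting on it. If the loop
 * blocks or runs late, the stop is delayed or missed entirely. */
static volatile bool estop_pressed = false;

static void estop_isr(void)          /* would be hooked to the button's GPIO edge */
{
    estop_pressed = true;            /* software still has to notice and act */
}

static void control_loop_iteration(void)
{
    if (estop_pressed) {
        printf("shutting down laser (software path)\n");
        return;
    }
    /* ...long-running work here can starve the check above... */
}

int main(void)
{
    estop_isr();                     /* simulate the button press */
    control_loop_iteration();        /* the stop only happens when this runs */
    return 0;
}
```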

PK (00:30:25):

Well, you asked if I would ride a robotaxi. I am not going to shame and name the company, but I had a ride with a robotaxi, and the safety argument had to do with who else was- The other person from the company was too valuable to let them die. So I figured it was probably safe enough or he would not go in. <laugh>

EW (00:30:41):

<laugh>

PK (00:30:45):

There was a safety driver. There was a backup driver. It was fine. It was a short drive. It was fine. But I am also a scuba diver. So even if it is higher than typical risk for your normal work day, if your exposure is low, yeah, it is fine. Live a little. That is okay.

(00:31:00):

But then afterwards I took a look at their hardware architecture. Their big red button went into the autonomy computer as a parallel input pin, GPIO pin. I am like, "Guys. No, you cannot do that. And I am sorry I rode in your vehicle. If I had known, I would not have ridden it."

(00:31:18):

So earlier when I said, "It depends on whose vehicle," yeah, seriously, it depends. I am told they fixed it. <laugh>

CW (00:31:25):

<laugh>

EW (00:31:30):

And yet, you still are not getting in their cars. Hmm.

(00:31:35):

One of the big things about your book is the AI.

PK (00:31:40):

Yes.

EW (00:31:44):

I do not know how to phrase this nicely. Do you hate AI?

PK (00:31:48):

No. I do not hate AI. AI is great for some stuff. I have a healthy respect for how things can go wrong. I am a safety guy, so I am always busy thinking about how things go wrong. I always have emergency backups. My wife used to give me the hardest time about, "Why do you have this long checklist? And bah, dah, dah, dah."

(00:32:11):

Then we are on a trip- Over several trips that we had taken together, she would say, "Oh, I forgot this. Do you have something?" and I would just hand it to her. She said, "Okay, I promise to stop giving you a hard time about your checklist." <laugh> So instead of at 2:00 AM local time running out to get whatever, it is like, "I just have it. It is right here."

(00:32:28):

So I think about how things could go wrong. If you are a safety person, that is a mindset you have to have. But that does not mean I live in fear every moment of my life. And it does not mean I am there to say, "No." I am there- I did not say, "Do not take risks." It is that you need to mitigate the risks appropriately.

(00:32:45):

So the issue with AI is that people are using it in ways where the risks are almost inevitably going to cause problems for them. They think about how cool it is, and they do not properly respect the risks. In particular, they do not respect how people react to risks.

(00:33:06):

So one of the chapters is about human factors, which for embedded systems, it was- I have been teaching for years and years. About 20 years, I have been teaching a module on human factors in courses where I could fit it in. So it is not like it is a new thing.

(00:33:20):

But it was not as big a deal as it is with AI, because people are uniquely suited to being conned by AI into not paying attention. It is just terrible. People are terrible at supervising automation. So they are building these systems where the plan is, when the machine makes a mistake, the person is going to be blamed if they do not fix it.

(00:33:45):

We have known for decades and decades and decades- Since before I was born even, we have known that people are terrible at that. So you are just setting them up for failure. So the part of AI I am unhappy with, is it is being deployed in ways that are guaranteed to set people up for failure. That is a problem. If you deploy in another way, that is fine.

CW (00:34:08):

We had a long show about this a while ago. Or at least we talked about this topic. Do you think- One of the things that came up was something you just said, which was that it is cool, which is already a red flag. When something is cool-

PK (00:34:23):

Something is cool. Well, if it is cool and innocuous, fine, right?

CW (00:34:26):

Yes. But it is an addictive coolness. Do you think that because AI, as we currently talk about AI, which embodies a lot of LLM stuff-

EW (00:34:36):

He has got so many air quotes going.

PK (00:34:37):

<laugh>

CW (00:34:39):

Do you think it is made worse by the conversational nature, the pretending, the anthropomorphized kind of system that it is now? Versus something that is like a model running in the background, that is detecting a squirrel or something, and says, "This box is around a squirrel," and you can make a decision.

(00:34:59):

Do you think that has made things worse in terms of not noticing the risk, because, "Oh. I am just talking to this thing which is friendly, and it is saying all sorts of friendly things."

EW (00:35:13):

You guys need to define AIs.

PK (00:35:15):

Yeah. Well, the LLM chatbot thing is certainly proving to be more problematic. But let us back up a little. You have the regular machine learning, deep neural networks, whatever. It is a classifier. "That is a person. That is a dog. That is a pig. It is a loaf of bread. Whatever."

EW (00:35:32):

Squirrel.

PK (00:35:32):

Squirrel. Yeah. Those things- The problem you have with people supervising them is that if it is right a thousand times in a row, it is really hard to keep paying attention.

(00:35:42):

I am going to use car examples. The book is not all about cars, but the robotaxi experience has given us concrete illustrations of things to watch out for. So anything that is a car example, generalize to what you are doing.

(00:35:56):

But if you have been through a thousand red lights and a thousand times your car stopped for the red light, you are not ready for number 1,001 when it does not. You are just not ready. Your reaction time is going to be way longer. You are going to blow through the red light.

CW (00:36:08):

Things that work 99% of the time, are worse than things that work 50% of the time. <laugh>

PK (00:36:13):

Yeah. Well, if it is trying to kill you every mile, you are going to pay attention. You will have a different issue. You will be imperfect in response, and one is going to get by you. But that is a little different than being lulled into complacency.

(00:36:23):

If it is medical images, if the system is perfectly accurate for a thousand images, it is really hard to pay attention at number 1,001. Now, if you have trained professionals, and you have a thoughtful system and backups- Airline pilots, physicians, folks like that, I am less worried about.

(00:36:44):

But average folks, how do you expect them to perform in that kind of environment, where the stakes are literally life or death, if you get it wrong? It is asking them to be superhuman, and they cannot do it.

EW (00:36:56):

I think that we should make them play a driving game.

CW (00:37:01):

Make the people, or the robots?

EW (00:37:01):

People.

PK (00:37:02):

Do they actually die if they do not pay attention? Or do they kill a pedestrian?

EW (00:37:07):

No, no.

CW (00:37:08):

Maybe electric shock.

EW (00:37:09):

They have to drive in the driving game, and then occasionally the AI makes them drive the real car. But they do not know, because they are still playing the driving game.

CW (00:37:18):

<laugh>

EW (00:37:18):

<laugh>

PK (00:37:19):

Well. So here is the problem. You can motivate them all you want.

CW (00:37:25):

A plot of "The Matrix"?

PK (00:37:26):

I do not think I would even want to go there. You can motivate them all you want. You can shame them if they make a mistake. But you are asking them to be superhuman. You just cannot change human nature.

(00:37:35):

It is unreasonable to expect, "Well, we told you to pay attention." What? That does not change human nature. You do not ask people to do things they cannot do.

CW (00:37:43):

No. But it is a great excuse for companies. <laugh>

PK (00:37:46):

There is a name for this.

EW (00:37:46):

I think you use the term "moral crumple zone"?

PK (00:37:48):

Exactly.

CW (00:37:48):

<laugh>

PK (00:37:48):

So you know there is a crumple zone in the front of a car. The idea of the crumple zone is to absorb the energy, to protect the passengers inside. Okay. That is mechanical crumple zone.

(00:38:00):

The moral crumple zone is you know your computer is going to make a mistake, and your plan is to use the person who is supervising it. If they are a driver, if they are a technician overseeing computer operation, whatever, there is some person handily available.

(00:38:16):

Their role is to be a one-time blame absorbent device, that is disposable. They absorb the blame, they crumple up, and then the manufacturer of this defective device does not get blamed. That is it. That is the moral crumple zone.

CW (00:38:32):

AI sin eater.

PK (00:38:37):

But you asked about ChatGPT, and its like.

CW (00:38:40):

Yeah, yeah.

PK (00:38:41):

Right. So to go there. In those cases, it is not that it sometimes makes mistakes.

CW (00:38:48):

Which it does.

PK (00:38:51):

All machine learning is statistical. You have a problem in that 99% is great for machine learning, but there are six more nines to go for life critical. That is the problem there. 99.999-something, a bunch of nines, if someone can die.

(00:39:03):

But ChatGPT type technology is a little different, because it does not lie to you. People think it made a mistake and it lies. That is not at all what is going on. That is projecting human thought onto it.

CW (00:39:16):

Right, right. It is the anthropomorphizing. Yes.

PK (00:39:18):

Right. What is really going on is there is a thing called "Frankfurtian <censored>." Yes. Okay. The definition of that is you are saying things with a reckless disregard for the truth.

EW (00:39:31):

What was the first word there?

PK (00:39:32):

"Frankfurt." There is a guy named Frankfurt. It is the author's name. There is a book on <censored>. So people say, "Frankfurtian <censored>." What they mean by that is that this is a precise term being used in a clinical way, and not just saying someone is acting badly.

(00:39:51):

So it is not malicious. It is not malicious intent, necessarily. It is that it is reckless disregard for the truth. So think about "Mad Libs" with randomized responses. They may make sentences that make sense, but truth does not enter into it.

(00:40:10):

The thing is, everything generated by a chatbot is BS. It is just statistically consistent with language use, but it does not actually know or care if it is the truth. Now, many of the things it will say look true to the reader, but that does not mean it knows they were true or false.

(00:40:36):

So it is not saying, "I think I will say something false." It is just, "I am just going to generate stuff." And the truth is in the eye of the beholder. If the beholder is sophisticated, they may decide it is not true more often. If they are unsophisticated, they may decide it is true. They may get sucked in, because it sounds authoritative, very confident.

(00:40:58):

It is even worse, because it will just make stuff up out of thin air, that could be checked, but it does not. It will make up web pages that do not work. Or it will say, "Here is a web page I am summarizing for you," and it will miss the context of the web page. And it is very persuasive. So it is really tough technology that way.

EW (00:41:18):

We talk about the hallucinations, that it makes up web pages that do not exist.

PK (00:41:23):

But "hallucination" is the wrong word.

EW (00:41:25):

But we forget that it is all hallucinations.

PK (00:41:25):

Yeah. "Hallucination" means it is capable of knowing the truth, which it is not. Right. That is why I prefer the BS term, instead of hallucination.

EW (00:41:36):

Chris is very anti LLM.

CW (00:41:40):

Although I do check in once a month or so and try them, just to see what is going on. But I try not to use them.

PK (00:41:46):

I know someone who uses them to write emails to- That this person is a landlord, and they use it to write polite emails to problematic tenants. And it is really good for that.

EW (00:41:57):

Yeah.

CW (00:41:57):

That is to write- Yeah, the obsequious tone is useful for certain applications.

PK (00:42:02):

Yeah. It is like, "This person is a problem," but you want to be polite about it. It is like, "How do you tell someone to stop doing something they should not be doing, in the politest way possible?" It is pretty good at that.

CW (00:42:14):

But you were going to say something else, Elecia.

EW (00:42:17):

I have had many bad experiences with it, with coding. Then time went by and someone suggested I use it to think about some physical systems. A physical system that I did not understand well. And it worked.

(00:42:40):

I did not just say, "Tell me about this." I said, "Tell me about this thing I already know," and then I walked it into the system. Then I found things that I could ask it to do. "Show me the state space equations for a PID controller." I know how a PID controller is set up in frequency domain. I know how to set it up as a code.

(00:43:06):

But the state space, the whole part of that math is kind of new to me. So I wanted to see how it looked, and compare it to some other state space I was looking at. It was all- It was fantastic. It was really nice. And it was stuff I could recognize as being true, because it was, again, a step away from where I was.
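For reference, one common textbook state-space realization of a PID controller, written with a first-order filter (time constant tau) on the derivative term so the controller is realizable. This is a standard form, not necessarily the exact one the chatbot produced.

```latex
% One state-space realization of C(s) = K_p + K_i/s + K_d s / (\tau s + 1).
% States: x_1 = integral of the error e, x_2 = low-pass-filtered error.
\[
\dot{x} = \begin{bmatrix} 0 & 0 \\ 0 & -\tfrac{1}{\tau} \end{bmatrix} x
        + \begin{bmatrix} 1 \\ \tfrac{1}{\tau} \end{bmatrix} e,
\qquad
u = \begin{bmatrix} K_i & -\tfrac{K_d}{\tau} \end{bmatrix} x
  + \left( K_p + \tfrac{K_d}{\tau} \right) e
\]
```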

PK (00:43:30):

Well, if this is well trodden territory, if there is lots of material out there on it, it often does a pretty good job. It struggles more at novelty. So if you want a routine task, it is pretty good.

(00:43:43):

But even then- I try it once in a while. I said, "Well, if I wanted to write an article or a book on topic X, what would I do?" and it produces this long list. Or, "I want to do a course on intro to embedded," or something like what you are doing. It will put out a list that is pretty plausible, 80%, 90%. And occasionally there is this thing in the middle that just makes no sense at all. Why does it even say that? And then it will go back to making sense.

(00:44:13):

But the other thing I have noticed is a lot of what it does is kind of soulless. It is really plain vanilla. Now if vanilla is what you want, that is great. But it misses things that are important, but niche. It goes for the average.

(00:44:31):

And the things it writes do not really have a voice. Now, if you are writing corporate emails to unhappy customers, that is probably exactly what you are looking for. So it depends on the application.

(00:44:42):

Except my interaction with chatbots on websites is nothing short of infuriating. Because I probably would not be asking them for help if what I needed was on their webpage.

EW (00:44:55):

Oh. Yes. I totally-

PK (00:44:57):

They just send me back to their webpage. It all comes down to, "What are you doing?" But personally, the way I look at it is, if you want to use a chatbot, you have to ask yourself- I am going to go back to my safety background.

(00:45:08):

You have to ask yourself, "What is my hazard and risk analysis for using the chatbot? What are the things it can do, that are going to cause me a lot of loss if it gets it wrong? And what is my mitigation? Is my mitigation going to be effective?"

(00:45:22):

Keeping in mind that if my mitigation is, a person who is bored out of their mind is going to look at it, they are not going to be very effective. Because they are going to get bored out of their mind.

EW (00:45:33):

You said "things that do not change" are often good. And that is where I have found some goodness. Because then I went back and tried to do some Python script stuff, and it just was totally- Because the libraries it gave me were deprecated.

CW (00:45:49):

Which one were you using?

EW (00:45:51):

I was using Gemini.

CW (00:45:51):

Mm-hmm.

PK (00:45:52):

What is the "things that do not change," is things where there is a whole lot of material about it from 20 different ways, and it can synthesize an average path through.

EW (00:46:00):

Right! Most of what I wanted was physics and systems engineering, which-

PK (00:46:04):

That has been around a while. Yep.

EW (00:46:06):

Was wonderful. Because it would explain it to me three different ways. I could push variables and it would do all of the algebra for me, and that was wonderful.

CW (00:46:14):

But you are an expert in the field. You have seen this mathematics. You have enough intuition to know when it probably was making a mistake, if it had occurred, which maybe it did not. But most people are not using it that way.

EW (00:46:28):

The people who are using it to be friends with it-

CW (00:46:31):

Even I find myself-

EW (00:46:32):

Are probably setting themselves up for the worst fall.

PK (00:46:37):

Yeah. That use is scary. Let me add one thing about using it. There is a thing called "anchoring bias." So to really use machine learning, you have to bring back your freshman psych, if you ever took it, and maybe go a little beyond.

(00:46:50):

This thing called "anchoring bias." So if you say, "Hey, ChatGPT. Give me a starting point and I will edit it." You are anchored in the narrative it gives you. The narrative it gives you may be vanilla or it may be just wrong. It may send you barking up the wrong tree.

(00:47:06):

I have had several times where I tried to use it, and it tried to anchor me in something that was just at the wrong path on the wrong mountain. Any amount of refinement was not going to get me there.

(00:47:16):

You have to be very self-aware of how people have bugs in their thinking, if you want to think about it that way. We have biases and gaps and failure modes in our thinking. And if you are not aware of them, ChatGPT will lead you right down the path.

CW (00:47:36):

I think one side note about that, is I think there is something wrong with the user interface to a computer or a system being conversational. I think that does something to us.

(00:47:50):

If it is pretending to talk to you, pretending to be a person. Or not pretending, but it is in the role, it is a text interface, it is conversing with you. You go back and forth, ask it questions, it asks you questions, it suggests things.

(00:48:07):

It feels like you are talking to a person. To the point where I find myself, when I do use it, getting annoyed and starting to personify it, and yell at it and stuff. And it is like, "This is not a healthy way to interact with a computer." If I am going to yell at my computer, I do not want it yelling back.

PK (00:48:21):

It is seducing you into thinking you are talking to a person.

CW (00:48:23):

Exactly.

PK (00:48:23):

Which is part of why they can sell the technology. I have a point on this, which is the Turing test. Remember the Turing test?

CW (00:48:31):

Mm-hmm. Yeah.

EW (00:48:31):

Yeah.

PK (00:48:32):

Way back when. For those three out of all your listeners who have not heard of it, the Turing test is the idea, going back to Alan Turing, that if you could type over a teletype machine back in the day, and you could not tell whether the thing on the other side was a person or a machine, then it might as well be a person for practical purposes.

(00:48:53):

This was an early proposed test for sentience. Everyone has known it had problems. What is not as widely known is even back in the 1970s, people knew that this was not going to work out.

(00:49:06):

There was a program called "ELIZA," which I remember playing with when I was in college. That was a chat program. But you could look at the source code and it was just a bunch of hacks. Just, "If you use this word, use this word."

CW (00:49:17):

A lot of if, then, else. Yeah.

PK (00:49:18):

If, then, else on the words. It was just mostly regurgitate back with some word changes.

CW (00:49:24):

"How does that make you feel?"

PK (00:49:25):

"I," "you" and "How does that make you feel?" Right. So okay, fine. But what they found was some small fraction of people thought it was a real person. It would take- Depending on the person, it might take you a long time to figure out it was not a person. Depending how- A lot of computer folks love breaking things, so it did not take them long. But ordinary folks, normies might take a while to figure this out.

(00:49:49):

So the observation, and I am certainly not the first one to make this, is that the Turing test is not a particularly effective test of intelligence. But it is a really good test of gullibility of the person.

CW (00:50:04):

Yeah. That is just how we are. <laugh>

PK (00:50:06):

Yeah. Right.

CW (00:50:08):

It is not an indictment on those people necessarily. It is just this raises a flag. This is something to be worried about.

PK (00:50:14):

That is right. Well, a lot of this is just exploiting- It is sort of hacking human nature. A lot of what is going on with AI today is hacking human nature. And if the forces doing it are interested in nothing but accumulating wealth, then that is probably not going to be good for a lot of people.

EW (00:50:33):

You wrote this book "Embodied AI Safety" for folks like CEOs, and other people who are interested in the technology as a whole.

PK (00:50:46):

Yeah.

EW (00:50:46):

Those people all seem to love the LLMs. Are you getting any pushback on this?

PK (00:50:55):

I have not gotten any pushback on this book, which is I guess remarkable. Maybe they are either in denial, or it just has not gotten to them yet. I have been really pleased. It has gotten a lot of traction right out of the gate; quite a number of books moved.

(00:51:12):

I am getting pretty good feedback. Lots of folks are saying they really enjoy reading it. Some are saying they have learned new stuff. Others like you are saying, "Well. It is sort of stuff I knew, but it gives me hooks to hang things on."

(00:51:24):

But as you get towards the back of the book, there are a lot of things about, "We have to change how we think about safety, because of AI." Because there is not a person responsible. Things like deciding when it is okay to break the law, which is an everyday thing that everyone does. Do you trust a machine with that?

(00:51:48):

Now engineers have to learn liability, because they have to decide- If you wanted to follow traffic rules perfectly, you would never get anywhere.

(00:51:57):

There is a tree that fell down in your lane. Do you cross the double yellow line to go around it? Well, it depends. As long as you are being responsible about it, it is okay. No police officer is going to give you a ticket for going around a tree in a thunderstorm, if nobody is coming and everything is clear and you are being careful about it. They are not going to go after you for that.

(00:52:18):

Or even- There are a bunch of things like this, where there are social norms and there is reasonable behavior. All the rules are in light of a reasonable person, and taking accountability that if something goes wrong, you are accountable. You cannot hold a computer accountable. How do you do that?

CW (00:52:36):

It is that old thing, right?

PK (00:52:37):

Right. Right.

CW (00:52:38):

Computers can never make a management decision, because a computer cannot be held accountable.

PK (00:52:41):

That is right. So do you let a computer make a decision that we would let a human make, knowing that the human would be held accountable, but the computer is not?

CW (00:52:53):

It would be better if the computers were sentient, because then we could just sue them.

PK (00:52:56):

Well, then they would care if they went to jail, but they do not.

CW (00:52:58):

Exactly. <laugh>

PK (00:52:58):

<laugh>

CW (00:52:58):

Exactly.

PK (00:53:01):

So part of the book is talking about how the world changes, when there is AI supplanting human agency and operation. Now all of a sudden there are all these things you have to do. It really fundamentally changes how you have to think about safety.

(00:53:14):

The last chapter- Because I do not want it to be a book about how everything is broken. The last chapter talks about if you want to build AI- I do not care if it is robotaxi, or it is a power plant control system, or it is a medical device, I do not care.

(00:53:30):

You need to build trust with people, because you are taking away human agency and asking them to trust a machine, which fundamentally does not care if it goes to jail, to make life or death decisions potentially. How do you do that?

(00:53:46):

It talks about how you can build trust in ways that are other than, "Trust us, bro. We are going to save lives ten years from now. So never mind the person we killed last Tuesday." Because that is right now, that is the industry approach is just a, "Trust us, bro" approach.

(00:53:59):

Which is a problem, because there is no technical basis to know this is actually ever going to save lives. And that is not required. What is required is the companies have to be responsible. If you put something on a public road, I want to know you are a responsible road user. Not that you are making promises that cannot be disproven for another ten years.

EW (00:54:20):

One of the examples you had in that later part of the book, was a car, autonomous car, that needed to stop. That had a critical error, had decided that it was time to stop and that the best it could do was stop where it was. Which, okay, let us say it ran out of gas, or its battery fell on the ground, or something happened.

PK (00:54:43):

There was the time the engine fell out of my car, so I desperately stopped then. That is a true story. Yes. <laugh> Yeah, sometimes there is nothing you can do. That is correct.

EW (00:54:52):

That is a perfectly fine assessment and thing to do. Except, you do not do it in front of a firehouse, right where ambulances or fire trucks will come out.

PK (00:55:09):

If you have a choice. This is the trust erosion game that goes on. They will say, "Well. If something goes wrong, we stop. Because we care about safety." There were robotaxis parking in firehouse driveways. What was never really brought out is that it is really unlikely there was no alternative.

(00:55:29):

If the engine falls out of your car, there is nothing you can do. But the argument is sort of, "Well, it might have been the engine falling out of our car, therefore we have no obligation to do anything better than stop." They were blocking fire trucks. They were blocking ambulances.

(00:55:42):

There was one robotaxi that stopped between firefighters and a burning car, so they had trouble getting at it to put out the fire. Just crazy stuff.

(00:55:55):

But there is a standard theme that comes out in the book, which is, "How do you judge how good it has to be? Cannot be perfect. Nothing is perfect. How do you judge how good it has to be?"

(00:56:06):

The really unsatisfactory feeling conclusion I came to is that, on a case by case basis, you need to compare it to how a competent and careful human would have done. So if a human-

EW (00:56:19):

But not a perfect human.

PK (00:56:20):

Not a perfect human. Right. But if an ordinary human driver who is not drunk and not distracted- So they are already in the top 20%. I am making a joke there. So if a careful, competent, qualified human driver could have moved the car out of the driveway, then the robotaxi should have moved out of the firehouse driveway.

(00:56:39):

If your axle broke and your engine fell out of your car, yeah, I agree, there is nothing you can do. But that is a very rare thing. The usual case is, "Of course, you can move another 20 feet." Even if it is at one mile an hour, be out of the way. So if the firetruck has to get out, you are not blocking it for 20 minutes, half an hour, whatever the time was.
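
One way to picture "a careful human would have moved" is as a small decision rule over candidate stopping spots. The following is only a rough sketch of that idea, not anything from the book or from any actual vehicle stack; the StopSpot fields, the creep distance limit, and the function name are all assumptions made up for illustration.

```python
# Rough sketch of "do better than stopping in place" after a critical fault.
# All names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class StopSpot:
    distance_m: float          # how far the vehicle must creep to reach it
    blocks_emergency: bool     # firehouse driveway, hydrant, active fire scene
    blocks_travel_lane: bool   # stopped in a live traffic lane

MAX_CREEP_DISTANCE_M = 10.0    # roughly the "move another 20 feet" case

def choose_stop_spot(current: StopSpot, candidates: list[StopSpot],
                     can_still_move: bool) -> StopSpot:
    """Pick where to come to rest after a critical fault.

    If the engine fell out, can_still_move is False and the only option is to
    stop where you are. Otherwise, act like a careful, competent human driver:
    creep a short distance, even at one mile an hour, rather than block
    emergency access.
    """
    if not can_still_move:
        return current
    reachable = [s for s in candidates if s.distance_m <= MAX_CREEP_DISTANCE_M]
    # Prefer spots that block nothing, then spots that at least keep emergency
    # access clear, then the nearest of what is left.
    ranked = sorted(reachable, key=lambda s: (s.blocks_emergency,
                                              s.blocks_travel_lane,
                                              s.distance_m))
    return ranked[0] if ranked else current
```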

EW (00:57:00):

A human at some point will say, "Okay. I have to push this car five feet."

PK (00:57:06):

Or whatever. If there is a police officer screaming at you to move your car, there is an exceedingly small fraction of drivers who will not do that. But screaming at a robotaxi does not do anything. There are photographs of that happening.

EW (00:57:20):

There is no consequence for it.

PK (00:57:23):

Well, that is right. So beyond robotaxis, ask yourself if the AI is supplanting human decision-making authority, and that decision-making authority comes with accountability. So the AI is supplanting it. What happens if the AI gets it wrong?

(00:57:43):

If the answer is, "Sue the manufacturer for product defects," you are basically saying, "Well, unless the loss is more than like a million or $10 million, there is no consequence." Because they will never get the money back from a lawsuit. Then you are basically acting with impunity. That is a lot of where we are right now.

EW (00:58:00):

Yeah. You can sue a company into oblivion, and then they are gone. And now anybody else-

PK (00:58:08):

A, that is counterproductive for that reason. But B, in almost every case, you cannot afford to mount the lawsuit. Because you have to pay your costs and you might lose. It is going to be difficult or impossible to find a lawyer willing to pony up the cost to run a product defect case for a one-off mishap. It is very difficult to make that happen.

EW (00:58:30):

So basically you have to kill a whole bunch of people, before it matters.

PK (00:58:34):

Under current law, that is correct. One of the things I would like to see is the law changed to say, "If the AI system is taking the responsibility the human would normally have, you should hold the AI system accountable for the same degree of care a human would have."

(00:58:50):

So you should be able to sue AI systems for tort negligence liability. Wrongful death, carelessness, recklessness. Just like a person. Except the AI does not care if it goes to jail, it does not have any money. So the manufacturer has to be the responsible party.

(00:59:09):

That may not be perfect. It will not make things perfectly safe. But it will put much better pressure on your manufacturers, to be responsible when they deploy this technology.

EW (00:59:20):

You mentioned before, you have a bit of legal theory for tort negligence in your book.

PK (00:59:26):

I have three law journal papers, co-authored with a law professor, which is three more law journal papers than I had ever hoped to have in my life, I will tell you. <laugh> So yeah, there is a section that sort of summarizes that, and a pointer out to all the law journal papers.

EW (00:59:44):

I think many engineers have wondered what their exposure, their legal liability, would be in some of these cases. So that was an interesting part of the book.

PK (00:59:59):

Yeah. That is tricky. We have seen recently some engineers go to jail, or get criminal sentences, for really egregious stuff like Volkswagen Dieselgate, things like this. Although there is some feeling that they were the sacrificial anodes for a more systemic issue. So saying that the engineer will never be held responsible has sort of gone away, but it has still got to be a really, really big deal.

(01:00:24):

But just ethically as an engineer, whether you are going to go to jail or not, you do not really want to be responsible for someone dying.

(01:00:33):

Talking to the engineers who were involved in the Uber robotaxi fatality, yeah, they blame that on the safety driver. But the engineers really were hit hard by that. They really felt it. Having someone die from your technology is pretty amorphous and theoretical, until after it happens. Then it can really, really hit you hard.

(01:00:56):

So if you are in this technology, it is a lot harder. You cannot say, "Well, we followed the safety standards, so I can sleep at night. Because we did everything we could do. We followed the safety standard."

(01:01:07):

These robotaxi companies, some of them are not following the safety standards. And now if something bad happens, you cannot really say you did everything, can you?

EW (01:01:19):

Having worked on FAA and FDA products, there is always a little more you could do.

PK (01:01:25):

There is. Although there are no safety standards for AI that have teeth in them for those areas now. So if you are working on the products, and the safety standards are immature or do not exist, how do you sleep at night?

(01:01:38):

The first step- Okay, I am here to sell my book. The first step is this book at least tells you how to think straight about it. It is not all the answers. But at least you get a framework to try and reason through the things I have thought of. Do I have a blind spot? At least do not have blind spots, when you are going into this.

EW (01:01:57):

With that, let me bring up the other two sections that I think were super important, safety and security, which you drew some really interesting parallels between.

PK (01:02:07):

Yeah. I would like to think of security- Now there is the whole IT based security, and credit card thefts, and all that stuff. And ransomware.

EW (01:02:14):

Yeah. Those are interesting. Let us not talk about them.

PK (01:02:16):

Right. So we will set those aside. I am not saying they do not matter. I am saying we are setting those aside.

(01:02:20):

What is left, what is unique about embedded system security in general, is that it is not about encryption. If you say, "I have an embedded system and I am secure because I encrypted," you probably are barking up the wrong tree. Because it is not about keeping secrets. Everyone knows you have your turn signal on. The fact you are about to turn is not a secret. <laugh>

(01:02:43):

It is more about integrity. Making sure that no one has subverted the code. Sometimes it is about denial of service of the safety shutdown function. There are different security properties you care about.

(01:02:58):

But the overarching framework, the way I like to look at it is, when you are doing safety on a good day, you have a safety case. Which is a well-reasoned argument, supported by evidence, for why I think I am acceptably safe.

(01:03:10):

If you are going to attack a system that has a safety case, what it amounts to is the attacker is looking for holes in the safety case. It is like, "Oh, they assume this will never happen, so we are going to make it happen. They assumed that the program image that is burned into the chips at the supplier is the one they sent the supplier. Okay. So we are going to burn a malicious program image, by bribing someone at the supplier."

(01:03:33):

So a lot of the attacks boil down to holes in the safety case. That allows you to think about security in the same framework as safety: hazard and risk analysis, and risk mitigation. But you have to think not just of the stuff that can break, but the stuff that can go wrong on purpose.
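
To make the "holes in the safety case" framing concrete, here is a minimal sketch, purely an illustration rather than the book's notation: a safety case modeled as claims with evidence and assumptions, where the unchecked assumptions are exactly what an attacker goes looking for. The Claim fields and the example entries are assumptions made up for the sake of the example.

```python
# Minimal sketch of a safety case as claims backed by evidence and resting on
# assumptions. Every assumption taken on faith is a hole an attacker can probe.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str                                             # what we argue is true
    evidence: list[str] = field(default_factory=list)     # tests, reviews, audits
    assumptions: list[str] = field(default_factory=list)  # things taken on faith

def attack_surface(safety_case: list[Claim]) -> list[str]:
    """List the soft spots an attacker would look for: claims with no supporting
    evidence, and assumptions that nothing in the case actually checks."""
    holes = []
    for claim in safety_case:
        if not claim.evidence:
            holes.append(f"'{claim.text}' has no supporting evidence")
        for assumption in claim.assumptions:
            holes.append(f"'{claim.text}' assumes: {assumption}")
    return holes

case = [
    Claim("Only our firmware image runs on the controller",
          evidence=["release checksum audit"],
          assumptions=["the supplier burns exactly the image we sent them"]),
]
for hole in attack_surface(case):
    print(hole)  # the supplier assumption is the hole someone could bribe open
```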

EW (01:03:53):

Malicious intent.

PK (01:03:55):

Malicious intent. Malicious actor. Yep.

EW (01:03:58):

Which is so much harder to protect against.

PK (01:04:02):

It is. Again, just like safety it cannot be perfect. But there are a lot of things that have already happened. Did you think about those? What is your mitigation? The mitigation does not always have to be technical, but sometimes it is.

(01:04:16):

You can learn- Just like with safety, you learn with experience. You build a hazard log, list all the hazards, and when you learn something new, you put it on the list. For the next project, you use that new hazard log.

(01:04:26):

Even- There are lots of folks who get frustrated and say, "Well. I cannot be perfect, so I am going to do nothing." And the answer is, "No. You do not have to be perfect. But you should not do nothing."

(01:04:36):

The best you can do is make a list of things to worry about, to think about, and add to it over time. Continuous improvement is okay for safety and security, if where you are starting from is a pretty robust starting list.
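
As a concrete picture of that hazard log habit, here is a minimal sketch with assumed field names and a made-up JSON file format; it only illustrates "keep the list, add to it, reuse it for the next project," and is not a template from any particular safety standard.

```python
# Minimal sketch of a running hazard log: a list of hazards and mitigations
# that grows as you learn and carries over to the next project.

import json
from dataclasses import dataclass, asdict

@dataclass
class Hazard:
    description: str   # e.g. "vehicle stops in an emergency access driveway"
    mitigation: str    # technical or procedural, but never "do nothing"
    source: str        # field incident, design review, standard clause, ...

def load_hazard_log(path: str) -> list[Hazard]:
    """Start the next project from the previous project's list, not from zero."""
    try:
        with open(path) as f:
            return [Hazard(**entry) for entry in json.load(f)]
    except FileNotFoundError:
        return []  # first project: the list starts here

def record_lesson(log: list[Hazard], lesson: Hazard, path: str) -> None:
    """When you learn something, add it so every future project inherits it."""
    log.append(lesson)
    with open(path, "w") as f:
        json.dump([asdict(h) for h in log], f, indent=2)
```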

EW (01:04:54):

Okay. Forgive me here. But there is a little whiny part of me that says, "I do not want to worry about this. I just want to make my widget."

CW (01:05:00):

<laugh>

PK (01:05:04):

Yes. I understand. <laugh> All of us go through that phase in life. Some stay in the phase. Some get out of it. Some go back. At some point you say, "This is just too hard. I am going to do hobby projects." "Okay. Cool. That is fine. Please do not sell them at scale, if you have not done safety."

(01:05:22):

That is the stakes of playing the game. If you want to build something that can kill someone, you have to do safety. If you do not want to do safety, go find something to build that cannot hurt someone.

(01:05:32):

The thing that makes it feel so overwhelming is, as computers connect to every part of our society and daily life, more and more things become safety critical. It used to be safety was this niche specialty, that only a few people worried about. Now safety is everywhere. It is because computers are everywhere.

EW (01:05:53):

Well, there is your problem.

PK (01:05:55):

Well. Just turn off all the computers. Everything would be fixed.

EW (01:05:57):

<laugh>

PK (01:05:57):

<laugh>

EW (01:06:00):

Christopher has been wandering through the house in the last couple days, as some of our lights are smarter, and complaining about their failures. It is hilarious. I am just like, "We could just go back to switches."

CW (01:06:13):

It was not the lights I was complaining about.

EW (01:06:15):

Well, it was everything breaking. <laugh>

PK (01:06:17):

Well, I have fun with this. Because I will be in someone's house, and the thermostat is not working. And I say, "Oh, I did the code review for that one. Here, try this." <laugh>

EW (01:06:23):

<laugh>

PK (01:06:23):

That happens more frequently than you might think.

EW (01:06:29):

It is funny how many things are safety critical. I worked on children's toys. On one hand, they did not tend to actively hurt other people. On the other hand, you still had to make sure that the kid could not hurt themselves.

PK (01:06:45):

Kids are creative, and they have a lot of time. They have a lot of time to worry about that. Yeah.

EW (01:06:48):

They really do.

PK (01:06:49):

Well, I will just tell you this. As a child, I learned how to reset circuit breakers, without my parents knowing that it had happened. And I will just stop there.

EW (01:06:59):

There are some questions. What were you doing, that required you-

PK (01:07:03):

No. No. No. Not going there.

EW (01:07:04):

No? No?

PK (01:07:05):

No. No, just going to leave it like that.

EW (01:07:06):

I am just going to assume you were licking light switches.

CW (01:07:08):

<laugh>

EW (01:07:08):

Or the light sockets. Or the- Anyway.

PK (01:07:11):

Remember, this is back in the day when lots of appliances had vacuum tubes in them. So we will just stop there.

EW (01:07:19):

Phil, it has been wonderful to talk to you. Do you have any thoughts you would like to leave us with?

PK (01:07:25):

I guess the high level thing is, I think safety is becoming more pervasive. And the advent of AI in everything, is fundamentally changing how you have to think about safety. More people have to understand safety.

(01:07:40):

And it expands the scope. There is safety, there is security, there is human-computer interaction, there is machine learning, there is legal stuff.

(01:07:47):

You do not have to be an expert. It is not reasonable to expect everyone to be an expert. But if you are literate in all those things, you are much less likely to have a bad surprise, when you create a product and deploy it. And realize, "Oh! There is this huge hole, because I did not understand the fundamental deal with some of these areas."

(01:08:08):

So I wrote this book "Embodied AI Safety" in part to help people get up to speed, and make sure they do not have any gaps in those areas. And in part to set out a menu of what all the challenges are. What did we learn from the robotaxi experience that applies to things that are not just robotaxis?

(01:08:23):

So I hope the book is useful to help people get up to the next level of what is going to be happening. Because AI stuff is going to be everywhere. We keep hearing that. And embedded systems are everywhere. And you put them together. It really changes things in a much more fundamental way, than a lot of folks have come to appreciate yet.

EW (01:08:44):

Our guest has been Philip Koopman, Embedded Systems and Embodied AI Safety Consultant, Carnegie Mellon University Professor Emeritus, and author of "Embodied AI Safety."

(01:08:54):

Physical copies are available wherever you get your books. The e-book will be ready for Christmas. If you would like an hour long preview, Philip has a great keynote talk about the topics covered in his book. There will be a link in the show notes to that talk.

EW (01:09:09):

Thank you, Philip. It is always great to talk to you.

PK (01:09:11):

Thanks for having me on. It has been a pleasure.

EW (01:09:14):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon supporters for their support. And thank you to Mark Omo for pre-chat and some questions he contributed. Of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(01:09:32):

Now a quote to leave you with, from Vernor Vinge's "The Coming Technological Singularity: How to Survive in the Post-Human Era." Vernor says, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." He wrote that in 1993, so we are running a bit behind.

CW (01:09:54):

<laugh>