431: Becoming More of a Smurf

Transcript from 431: Becoming More of a Smurf with Jasper van Woudenberg, Chris White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Have you taken a look at the Hardware Hacking Handbook? I hope so. This week we will be talking to Jasper van Woudenberg.

CW (00:00:18):

Hey, Jasper. Welcome.

JW (00:00:21):

Thank you, Chris. Thank you, Elecia.

EW (00:00:24):

Could you tell us about yourself, as if we met at an Embedded Systems conference?

JW (00:00:32):

Sure. I am Jasper from Woudenberg, as we say in Dutch, or Jasper van Woudenberg, as we say in the US. I am originally from Holland, but now living in the Bay Area with two children, one wife and three bicycles.

CW (00:00:47):

<laugh>

JW (00:00:47):

I have been programming since I was about eight years old. Hacking software since my early teens, and hacking hardware since my mid twenties, with my company Riscure. I am specifically interested in side-channel analysis and fault injection. And recently more interested in how we use pre-silicon simulations to root cause vulnerabilities on chips early, so they can actually be fixed before they get baked into a chip.

(00:01:17):

I am generally a curious person, so I would like to know how things work on the inside, which is not uncommon for reverse engineers, I guess. I also like to challenge myself to do things that I have not done before, sometimes maybe things that I am a little scared of.

(00:01:34):

And, as Elecia already introduced, last year Colin O’Flynn and I released the Hardware Hacking Handbook, which weighs in at two to the ninth pages and took us about seven years to complete. I like to flirt with AI. And, in contrast to my former self, I try to avoid caffeine daily and try to get nine hours of sleep.

CW (00:01:58):

<laugh>

EW (00:02:01):

All right. We are going to talk more about the Hardware Hacking Handbook and hardware hacking, which should not come as a shock. But for now, we want to do Lightning Round. We will ask you short questions and we want short answers. And if we are behaving ourselves, we will not say, "Why?" and, "Are you sure?" Are you ready?

JW (00:02:19):

Let us do it.

CW (00:02:21):

When somebody finds out what you do, what question do they always ask you?

JW (00:02:27):

What is the latest thing that you hacked?

CW (00:02:32):

What is the latest thing that you hacked?

JW (00:02:34):

Yeah.

CW (00:02:36):

No, no, that is what I am asking now. <laugh>

JW (00:02:38):

Oh, that is the next question. Okay. Usually that is confidential.

CW (00:02:47):

Oh, okay. That is-

JW (00:02:48):

I have been hacking a lot of things in simulation lately, so I guess the answer would be a simulated AES core.

CW (00:02:57):

Okay.

EW (00:02:59):

What is something that a lot of people are missing out on, because they do not know about it?

JW (00:03:04):

I think a lot of people are missing out on internals of how systems work, just opening up stuff and looking what is on the inside. I think it is a wonderful world.

CW (00:03:17):

Which Sesame Street character best represents you?

JW (00:03:22):

<laugh> I was not a big watcher of Sesame Street. I guess-

EW (00:03:31):

You can choose any Muppet.

JW (00:03:33):

Yeah.

CW (00:03:34):

<laugh> Does not help.

EW (00:03:35):

<laugh>

CW (00:03:39):

You can pass.

JW (00:03:40):

I will pass on this one, yeah.

EW (00:03:43):

Do you have a favorite video game?

JW (00:03:47):

Ooh, I have not played video games in a long time!

EW (00:03:49):

<laugh>

JW (00:03:53):

I used to really suck at video games, which is why I got into hacking.

CW (00:03:57):

<laugh>

JW (00:04:00):

Let us say there was this game in my mid teens, which was called, I think, 4D Drive, and I liked playing it so much that I wrote a level editor for it at some point.

CW (00:04:13):

If you could teach a college course, what would you want to teach?

JW (00:04:16):

Oh, definitely something on hardware hacking.

EW (00:04:19):

Do you have a tip, everyone should know?

JW (00:04:23):

Saying, "I do not know." when you actually do not know is a superpower.

CW (00:04:28):

<laugh> It is very hard for some people.

JW (00:04:31):

Oh, I know. Yeah.

EW (00:04:34):

So the book, Hardware Hacking Handbook. Your co-author was Colin O'Flynn of ChipWhisperer fame. What was it like working together?

JW (00:04:47):

Well, I think Colin, even for a Canadian, he is friendly.

CW (00:04:52):

<laugh>

JW (00:04:58):

He has an academic background, so he is still critical where needed. So he was a really good sparring partner to think about what are we going to put in the book? What are we not going to put in the book? Feedback on text. I think that really helped. And of course, we both are really interested in this topic and passionate about it, so that is always a good binding factor. He was also persistent enough to work through years of writing with me. Yeah, I think in that sense it was a really smooth partnership, if you will.

EW (00:05:36):

You said it took you seven years to write the book. Did the landscape of hardware hacking change in that time?

JW (00:05:48):

Oh, <laugh> that is a more painful question than you realize. There were certain sections of the book that we had to rewrite probably three, four times as things progressed. So the answer is definitely yes. Trying to come up with a good example of this. One thing that remained mostly stable, at least, was the set of basic techniques. The book is focused on people who are trying to get into the field of hardware security. So a lot of the basics did not change.

(00:06:29):

But there were a few things, like for instance, deep learning applications in side-channel analysis that all of a sudden came up. I know the Spectre and Meltdown vulnerabilities came up during the writing of the book. So those are all things we were like, "Oh! We better go back to do some more edits, and make sure that we cover some of that as well."

(00:06:52):

We also realized that if the speed of writing is actually slower than the speed at which the world moves around you, you may actually end up in an infinite loop of edits and never submit a book. So, near the end, we had to push on the gas a little bit to make sure we got the book out and could call it a version 1.0.

EW (00:07:18):

How did you decide you were done?

JW (00:07:22):

Ooh, that is a good question. Honestly, I think at some point, you just want to move on and get it out there. So I think we were happy with the content that we had. We had a few tough phone calls together where we are like, "Nope, we are not going to add this new thing, because it is going to extend the timeline again. We know it is really cool content, but there can always be a v2." I think that is at some point, the thing that started going through our heads. This is not a promise, by the way, but at that point it was a way for us to close things off and call it done.

EW (00:08:08):

The first section of the book, chapters one through three, was an introduction to embedded systems.

JW (00:08:17):

Yep.

EW (00:08:19):

What made you put that in? Why did you not just start with, "Okay, let us get to hacking"?

JW (00:08:23):

We assume you know all of this.

CW (00:08:25):

<laugh>

JW (00:08:25):

Do you think we should have left that out?

EW (00:08:30):

No, I actually liked it quite a lot, but it was definitely an introduction to embedded systems as a small course, and I was surprised to find it there.

JW (00:08:41):

Yeah. Most things in the book are there as a very deliberate decision on our part. And we are really aiming this at the beginner, meaning somebody at early college level, maybe no hardware experience at all, or embedded experience at all, and trying to give these people just enough information to understand the more hacking-focused sections of the book. Just to make it more of a self-contained book, a journey of, "Okay, let us first learn a little bit about how these embedded systems work, and then we can start poking at them."

EW (00:09:35):

Some hacks are more easily accomplished through things like logic analyzers, and some are more easily accomplished by things like ChipWhisperer, where you are doing differential power analysis or things like that. Which one of those is more interesting to write about?

JW (00:09:59):

More interesting to write about?

EW (00:10:03):

I guess the question is also, is it more interesting to look inside the chip, or more interesting to look inside the hardware?

JW (00:10:09):

I think they are both interesting fields. I think they are very different disciplines though. When you look at logic analysis, it is much more about reverse engineering the ones and the zeros that you see, like interpretation of that, which can be a fascinating field in itself.

(00:10:30):

Once you start looking inside of the chip with side channels, et cetera, you have these very noisy signals that come out and the puzzle that you had with logic simulation all of a sudden has a couple of extra dimensions. You need to make sense out of this noise. Do we need to first average a bunch of traces to see if we can get something? Do we need to do some frequency filtering? What is actually the operation that is going on?

(00:11:00):

If you look at it from a perspective of somebody who is beginning hacking, definitely start with the logic side, build up those reverse engineering skills, and then you go deeper into the chip, where you start dealing with these 4D chess type problems.
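As a minimal sketch of the "average a bunch of traces" step Jasper mentions, here is a C routine with made-up names and a flattened trace buffer; averaging aligned power traces suppresses uncorrelated noise so the data-dependent signal stands out, and frequency filtering would be a further step on top of this.

```c
#include <stddef.h>

/* Average n_traces aligned power traces, sample by sample.
 * traces is laid out as n_traces rows of n_samples values each.
 * Uncorrelated noise shrinks roughly with 1/sqrt(n_traces), while the
 * data-dependent part of the signal stays, which is the whole point. */
void average_traces(const double *traces, size_t n_traces,
                    size_t n_samples, double *avg)
{
    for (size_t s = 0; s < n_samples; s++) {
        double acc = 0.0;
        for (size_t t = 0; t < n_traces; t++) {
            acc += traces[t * n_samples + s];
        }
        avg[s] = acc / (double)n_traces;
    }
}
```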

EW (00:11:21):

It definitely is a 4D chess type of problem. I agree that you need the basics before you look deeper into those- like trying to find the keys inside of a chip. It is so much harder than trying to find what the chip is saying to this SPI flash.

JW (00:11:40):

Definitely. Yeah.

EW (00:11:43):

Your chapter titles were very amusing.

CW (00:11:47):

<laugh>

JW (00:11:49):

I am glad to hear that.

EW (00:11:50):

"Casing the Joint: Identifying Components and Gathering Information." How much time did you spend procrastinating choosing titles versus writing the material?

JW (00:12:03):

<laugh> I think unfortunately vanishingly little. It would be a nice story if we had one to tell. No. I am actually trying to think back on the history of that. What a lot of people might not know is, I think we started off this book with seven authors or so.

EW (00:12:26):

Wow.

JW (00:12:32):

Authors with way bigger names and claims to fame than Colin and me. Colin and I were the only ones who were silly enough to make it all the way to the end. One of the other authors was Joe FitzPatrick, who is also well known in the hardware security field. I think he might have kicked off the trend for- He also donated one of the first chapters on signaling and logic interfaces. I think he may have started the trend, and then Colin and I just rode that wave for the other chapters as well. I would say probably 80% of the titles are Colin's, though. He was much better at this than I was.

EW (00:13:26):

The book goes through different ways of hacking hardware, as to be expected, but there was one chapter about countermeasures. What was in it and what would you add to it now?

JW (00:13:43):

There are a couple of chapters, and the countermeasures one is among them, that are informed by Colin's and my experience of explaining hardware hacking to interested people, customers, et cetera. Eventually it is nice that we can find vulnerabilities, but the question for us is always, what do we do about them?

(00:14:10):

It turns out that things like side channel and faults as well, they are fundamental issues in the way the chips are built. It is not a bug that you just fix and then you move on. You actually have to actively counter these physical properties that we are exploiting. So the trigger for the chapter was people asking, "Okay, now what? How do I fix this?"

(00:14:40):

In the chapter we go through a number of sort of coding patterns. It is focused more on software developers. With software, I mean firmware as well. I know there are people who have different implement- This is not web stuff, right? This is embedded system programming. So software that runs on an embedded system, and almost a design patterns idea of what are the countermeasure patterns that you can do. So for instance with fault injection, you try to flip some bits in a chip that is actually executing some program.
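As a rough sketch of the kind of firmware countermeasure pattern being described, here is a hardened PIN check in C; the helper names (random_delay, panic) and the constants are invented for illustration, not taken from the book.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical helpers, assumed to exist elsewhere in the firmware. */
extern void random_delay(void);   /* jitter to make glitch timing harder    */
extern void panic(void);          /* e.g. wipe secrets and reset the device */

/* Avoid 0/1 booleans: a single flipped bit should not turn "deny" into "allow". */
#define ACCESS_GRANTED 0xA5C3u
#define ACCESS_DENIED  0x5A3Cu

uint16_t check_pin(const uint8_t *entered, const uint8_t *stored, size_t len)
{
    volatile uint16_t verdict = ACCESS_DENIED;

    random_delay();                              /* desynchronize the glitch window */
    if (memcmp(entered, stored, len) == 0) {
        random_delay();
        /* Redundant second comparison: an attacker now has to glitch twice. */
        if (memcmp(entered, stored, len) == 0) {
            verdict = ACCESS_GRANTED;
        }
    }

    /* Any value other than the two known constants indicates a fault. */
    if (verdict != ACCESS_GRANTED && verdict != ACCESS_DENIED) {
        panic();
    }
    return verdict;
}
```

Note that memcmp is not constant time, so a real implementation would pair this with a constant-time comparison to avoid trading a fault weakness for a timing side channel.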

EW (00:15:30):

Like if you were playing pinball and you bumped the table.

CW (00:15:36):

<laugh>

JW (00:15:36):

Exactly. Yes. <laugh> It is funny you bring it up. That is one of my favorite go-to examples in some of the presentations I did. Because the analogy goes even further, like why would you bump the pinball table? It turns out that historically, you had these pinball tables where there was no tilt sensor, back in the day, and it was fine. People were just playing with the thing.

(00:16:04):

At some point when there was a monetary payout attached to it, in some gambling halls I guess, or casinos or wherever they were, then it became interesting to start bumping that table, right. To keep the ball in. That is when the tilt sensor was introduced. So it was really this cat and mouse game, or attacker versus defender, where all of a sudden there was a monetary incentive to start kicking that table as much as you can. And then the countermeasure was, "Let us put a tilt sensor in there."

CW (00:16:39):

Yeah. And then you would try to figure out how far you could go. <laugh>

JW (00:16:43):

Exactly. <laugh> And this applies one to one with fault injection on embedded systems. Right? Why would you fault a system that does not hold any value? Well, you probably will not, or maybe you are just doing some...

CW (00:16:56):

But what is the equivalent of the tilt sensor <laugh>?

JW (00:17:02):

If you go back to the countermeasures, I guess the most direct equivalent would be that more secure chips sometimes actually build in sensors to detect faults. This can range all the way from sensors that monitor the VCC, basically the power line, to see if there are glitches coming in from that side.

(00:17:25):

More advanced chips may actually have optical sensors in the die itself. So if you open up the chip, expose it and start hitting it with lasers, and you accidentally hit the light sensor, then it knows it is under attack. Those I think would be the most direct, how do you say that? The most direct equivalents?

CW (00:17:56):

It is very interesting. I have never heard of the photo sensor inside a chip before. It is an obvious thing to do, but it is like, "Oh, okay. That is kind of crude." <laugh>

JW (00:18:10):

When you design a chip, that is actually expensive, right? The area, it takes away from performance. That is why it is not in all chips, right? These are chips that are in smart cards, that are in the bank cards, or the credit cards with the chip that we carry around nowadays. They will probably have these kinds of features, because there it makes sense.

CW (00:18:29):

So you got to do your fault injection in a dark room then.

JW (00:18:33):

Yeah. <laugh>

EW (00:18:34):

Or not use a laser.

CW (00:18:35):

Or use an IR laser.

JW (00:18:37):

Laser? Yeah. Or you look at the die. You try to avoid the positions where the laser sensor is.

CW (00:18:48):

Amazing. That is another way you can do that.

EW (00:18:53):

I have a listener question from Benny, who asks: in many risk calculation systems, physical attacks are generally kind of low on the feasibility scale. Is that true? Or is it a misconception about what hardware hacking is?

JW (00:19:13):

Yeah, that is a good question. The first thing that I always like to say is, if you have a system that is attached to the internet, the first thing you got to worry about is software vulnerabilities. Let us be honest, right? It means that anybody from anywhere in the world can probably reach your system, break into it.

(00:19:34):

With most hardware attacks, and I am putting a little asterisk on that, that I will get to in a second. With most attacks on hardware, you have to be physically present. So that is where you need to start thinking about your risk calculus, if that is indeed the case. There are mobile phones, Wi-Fi routers, there are smart cards. These are all systems where the attacker can potentially get physical access to something. Of course, they still cannot scale things like they would on the software side, on the internet side of things.

(00:20:16):

But scale is not always what is necessary. Sometimes, there is some valuable data on one particular device, and if you get it off you win. Let us say, a Bitcoin wallet, right? If I have a hardware Bitcoin wallet, and I know there is a million dollars on there and I could steal the seeds off of there, that is not something that I need to scale. I just need to do it on one device and then I am going to go live in Rio for the rest of my life. Well, maybe not a million dollars, but anyway.

(00:20:46):

So scalable attacks always come first. Shortly after that, hardware attacks come in, and I think what people do not often realize is that if you do not protect the system against hardware attacks, like I mentioned earlier, it will be vulnerable. With a ChipWhisperer, you are going to be able to break into it and get some data out that you may not want to get out.

(00:21:23):

I left a little asterisk earlier. I said, "Most hardware attacks", and I need to actually be precise in what I say. Most attacks that exploit hardware vulnerabilities are ones that you have to do locally. So you have to be physically present with an EM probe or with a laser system or ChipWhisperer or whatever it is. In the last few years, there have been attacks on hardware that do side-channel analysis or do fault injection remotely. The way that that works is actually quite interesting.

(00:22:05):

Colin had a publication on this. On the side channel side, he managed to find a system that had an ADC in it, and that ADC could measure the power consumption of the device itself. So you could remotely, let us say hypothetical situation, you remotely have some sort of shell access to this system. You do not have root access, but through this ADC you can measure the power consumption of the device. So you can do, I do not know, you can start...

EW (00:22:42):

All of my devices measure their power consumption so that I know what the battery is doing, so-

CW (00:22:46):

Yeah, but you do not have shell on all your devices.

EW (00:22:50):

That is true.

CW (00:22:50):

<laugh>

JW (00:22:51):

Yeah. I do not know about all your devices, obviously. I think the critical part here is that you need to be able to sample relatively often. Right. If you look at the battery once a second or so, if that is the max sampling rate, then it is not going to help side-channel analysis much. But if you can do this 10,000 times a second or a hundred thousand times, you can get really fine readings. You might be able to do side-channel analysis that way.

(00:23:18):

Similarly, there have been attacks where you exploit the fact that modern chips, they dynamically adjust their frequency and their input voltage, right? In order to maintain battery life. If you can simultaneously drop the voltage and increase the frequency, you may actually be able to push the system outside of its normal operating range. And the effect is faults. Bits just start flipping once that happens. If you get enough control and that is what these attacks, or at least the publications on these attacks, have shown, you may have enough control to start injecting faults in a system remotely that way as well.

CW (00:24:09):

You are effectively overclocking the chip at that point, right?

EW (00:24:12):

And causing a brown out.

CW (00:24:13):

Because the voltage is too low to maintain its...

JW (00:24:17):

Yep, exactly.

EW (00:24:20):

There is a class of physical attacks that lead to software attacks. Things like reading out encryption keys that may be used on a firmware update. You read it out on one system and now you have the keys for everybody. That depends on the security of your system, but let us just say that that happens. How many other attacks are like that, where you attack the hardware, but that gives you the ability to attack a larger attack surface? How many times did I say "attack" there? Was it 12?

CW (00:24:54):

<laugh>

JW (00:24:56):

I am used to it, but <laugh>, yes. In quite a few scenarios this happens. One of the great examples, and we covered it in our book as well, is the PlayStation 3, where the break basically started with a hardware attack. They managed to dump some of the hypervisor and the code that is normally not accessible, and find vulnerabilities in there that then later could be exploited through software means. That is one typical scenario.

(00:25:35):

Another thing that some of our customers are concerned about is, "Okay, I dump the firmware. I want to keep my firmware proprietary. I do not want people looking at it. That is a choice." Fine. Once you dump it, you might be able to use that, as a competitor, for those products.

(00:26:01):

We are seeing that more and more also with neural networks nowadays. There are large investments into, let us say, a neural network for self-driving, right? There are billions of dollars being poured into the design of these networks. There is quite a bit of financial stake for these companies to protect that. If you can dump that by, you know, getting access to these systems through a glitch, and then just doing a memory dump and getting this information out, those are the kinds of scenarios that these companies are trying to mitigate.

EW (00:26:37):

What is required to take a physical attack like this from an evaluation board or a lab setting, and put it towards a production system? Is it very different?

JW (00:26:52):

It is basically the difference between white box and black box hacking. Like, do I have access to all the internals or do I not? The difference is usually that, where you do not have access to the internals, much more trial and error is involved. Let us say that I have a system that I need to- In the lab we would get a system and it is like, "Okay, here is the program to exercise the AES core. Here you can program the key. Please do your lab analysis and then get back to us on how many side channel traces it takes to get the key out."

(00:27:33):

Somebody who does not have that kind of access may actually have to figure out, "Okay, so how do I even exercise this AES core? What is the protocol like?" These are all things that in security we consider something that you can overcome. It is not a secret per se, but it takes some effort to get through. So the main difference between the lab and reality is just that much more guesswork and patience are needed outside of the lab.

EW (00:28:14):

You mentioned lasers, and we have talked about ChipWhisperer a little bit, but a lot of these things are really expensive. What is the easiest way to experiment and learn about physical attacks, without spending a large amount of money on lab equipment?

JW (00:28:35):

Yeah, the least expensive option is to look at our book's VM. That is accessible to anybody, even people who do not have the book. If you go to the book's website, which is hardwarehacking.io, you can download the VM, and you can start practicing some of these techniques in simulation.

(00:28:57):

After that, I think ChipWhisperer is probably your best option. And after that, you start getting into the type of equipment that we sell at Riscure, which is more the professional grade stuff.

EW (00:29:16):

Although one of the cheap sub $10, sub $20 logic analyzers will do you for a bit. Those are pretty fun.

JW (00:29:26):

Oh, absolutely. My brain automatically goes to side channels and faults. I know some of my colleagues have also done presentations on, like, sub $50 attacks. I think, "You cannot do this with a ChipWhisperer, but there are some cheap oscilloscopes you can buy." Yeah, I would just Google that. I think it is called "CheapSCAte" or something, where SCA is somewhere in the "skate" of "CheapSCAte".

EW (00:29:55):

In the book, there was not a chapter about ethics or responsibilities. If your book comes up as evidence in a court case where people were harmed, will it bother you?

JW (00:30:09):

Yeah, definitely. Obviously when you write things about security and hacking, you know that these can be used for nefarious purposes. That does not make you feel good necessarily. But I still would publish the book, because I still believe that bottom line, this book will do more good than it will do harm. You know, people get killed in car accidents every day. We are still driving cars, because we think that the risk trade-off is worth it.

(00:30:45):

I think this is something you will see commonly in security, right? So the idea is, if we do not publish things, it does not mean that the vulnerabilities are not there, right? It is just going to be harder to know about them and therefore harder to fix them. By that reasoning, yeah, the goal is really to get the information out there so that, as humanity, we can progress beyond it, and have a discussion on how we can really address these things.

(00:31:34):

So in that sense, it is not- Hardware security is not that different from software security, in the sense that we want to be open about it. There is one sort of caveat there, and that is that patching hardware is kind of hard. So, let us say you have a pacemaker built into your chest and there is a vulnerability in there. Now, what do you do? I am not going to pretend that I have the answer here. What is really important is to at least be able to patch the firmware, right? Because a pacemaker is built into your chest. So doing side-channel analysis or fault injection is going to be hard in the first place. So if you can patch the firmware, at least you can protect that remote attack surface that is on there.

(00:32:35):

I find with hardware, it is- I know software patching is already difficult <laugh>, but hardware patching is pretty much impossible. You end up in a lot of these scenarios that can go two ways. I know one of my friends here in the Bay Area, he runs a company that does cryptocurrency recovery. So what he will do, or his company, is take a wallet, a hardware wallet, for instance, actually break it and then give the seeds to the person who accidentally forgot their PIN and locked themselves out of their retirement fund.

(00:33:18):

Is this...? I think it is good that he is able to return this money to the person with the retirement fund, but he is also relying on the fact that he needs to break hardware in order to do so. Right? Should those hacks be published? Yes or no? I do not know. It is a good question.

(00:33:43):

I do not see... In software, there is sort of this, I would not say industry standard, but there are some companies like Google Project Zero who say, "We give a 90-day disclosure window: we inform the vendor and they have 90 days to patch." I do not think there is something like that for hardware. I have not seen it at least. Probably because of this complexity of patching, it just makes it a harder question to answer.

EW (00:34:19):

The ability to patch software is probably the largest attack surface that embedded systems have. If you cannot update firmware, then your attack surface dwindles, although it leaves you open to not being able to fix what is there. Is there anything similar with hardware? You said you do not know how to patch hardware, but is there something that should make it more difficult to attack, that actually makes it kind of easier to attack?

JW (00:34:55):

Yeah, that is a good question. Maybe before we get into that, I think firmware updates are really important. I think that is something that we <laugh> we should have in-

CW (00:35:09):

Yeah, I do not think we are anticipating that-

EW (00:35:10):

I was not saying it should not, just that it is one of those areas that is dangerous.

JW (00:35:17):

Yeah, agreed, right? You need to get that part right. I think the nice thing is, once you get that part right, it is not as bad to get other parts wrong <laugh>. Yeah, so "hardware is not updatable" is not 100% true. There are some things called "ROM patches", which are usually hard to apply in the field.

(00:35:43):

Basically what manufacturers do is, inside the ROM code they have a couple of hooks, or parts of code, that basically say, "Hey, look up in this table that we can patch later, for instance in eFuses, if there is a patch, and then jump into that patch code." So there are some things that you can patch, but it is limited. And generally you do not want to mess with ROMs when they are already in the field, so after manufacturing.

(00:36:13):

Let us set aside that earlier case, because it does not always even prevent fault injection or side channel attacks. So the question was, "Is there anything in hardware where a security feature makes it easier to attack?" I have seen that. What I mentioned before is, when you look at fault injection, we can have these countermeasures, and one of the countermeasure strategies is redundancy. So instead of doing a password check once, I am going to do it ten times, and then I am going to make sure that it passes every time. Because now an attacker needs to attack ten times instead of once, which is much harder.

(00:36:59):

What can happen is that now, because you are doing things multiple times, you actually start to leak more information. So your fault injection countermeasure is actually amplifying your side channel leakage. That is not something that you obviously want to have. So there are sometimes these subtle interplays between countermeasures for one technique that actually make another technique easier to apply.
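A toy illustration of that interplay, with invented names and a deliberately naive comparison routine: each redundant check of the same secret is one more chance for the power trace to depend on that secret, so the fault countermeasure hands a side-channel attacker more signal to average.

```c
#include <stdint.h>
#include <stddef.h>

/* Deliberately non-constant-time compare: its duration and power profile
 * depend on how many bytes match, which is where the leakage lives. */
static int compare_leaky(const uint8_t *a, const uint8_t *b, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (a[i] != b[i]) {
            return 0;   /* early exit: runtime now depends on the secret */
        }
    }
    return 1;
}

/* Fault-injection countermeasure: repeat the check so a single glitch cannot
 * flip the verdict. Side effect: the secret-dependent comparison now shows
 * up ten times in every power trace instead of once. */
#define N_REDUNDANT_CHECKS 10

int check_password_redundant(const uint8_t *entered, const uint8_t *secret, size_t len)
{
    int passes = 0;
    for (int i = 0; i < N_REDUNDANT_CHECKS; i++) {
        passes += compare_leaky(entered, secret, len);
    }
    /* Every repetition must pass; anything else is treated as a fault. */
    return passes == N_REDUNDANT_CHECKS;
}
```

Making the comparison constant time (and masking the data being compared) would keep the redundancy without multiplying the leakage.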

EW (00:37:35):

Going back to ROM, many chips these days have read-out protection, which seems great, nobody can read my code, but as Nathan, a listener, pointed out, these always seem to be broken in some way. We had Jess from Oxide here talking about an ST vulnerability. Why is that hard? That does not seem to be hard! <laugh>

JW (00:38:10):

<laugh> Security is hard. <laugh> We can get into the philosophy of that, but I think first of all, being "broken" is a term that I do not like to use in a black and white sense, because in the end everything can be broken. The question is, how much effort do you need to get there? And once you have learned how to get there, how easy is it to repeat on another device?

(00:38:48):

With logical vulnerabilities, usually it is, you take some effort, maybe it is hard to read out the code in the first place, but you figure out a way around it. Once you have done that, you can just repeat it on all devices. That is not something that always holds for things like side channel. Let us say that each device has a unique key, maybe I can tune a system to do side-channel analysis, but if it takes a million traces to get the key out, I have to take a million traces for every device, so it does not scale as well.

(00:39:26):

That is why I always like to think of "broken" in terms of how broken, or how hard is it to get somewhere. Specifically about readout protection, from the logical side, I agree with you that it should not be hard, right? The bugs are being introduced. That happens. Hopefully, those will sort of filter out over the years.

(00:39:57):

Where it gets harder is things like fault injection. So, readout protection is an access control mechanism. Usually somewhere in the chip it boils down to a single bit that says, "Access? Yes or no." And if I can flip that with fault injection, then I can be given access. That one is harder, because now I need to start thinking about, how do I add redundancy to that? It is not a logical "I have a bug" problem anymore. It is, again, an inherent system property that I need to create countermeasures around.
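As a sketch of what that redundancy can look like, assuming a hypothetical read_lock_fuses() accessor: instead of trusting a single bit, the unlock decision requires an exact wide pattern, read twice, and everything else is treated as locked.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical fuse accessor: returns the stored readout-protection word. */
extern uint32_t read_lock_fuses(void);

/* "Unlocked" is a full 32-bit pattern rather than a single bit: flipping one
 * or two bits with a glitch lands on neither valid state and stays locked. */
#define RDP_UNLOCKED 0x5AA5C33Cu

bool debug_access_allowed(void)
{
    volatile uint32_t first  = read_lock_fuses();
    volatile uint32_t second = read_lock_fuses();   /* redundant read */

    /* Both reads must agree and match the exact unlock pattern; any other
     * value, including a partially flipped word, means "deny". */
    if (first == RDP_UNLOCKED && second == RDP_UNLOCKED && first == second) {
        return true;
    }
    return false;   /* default deny */
}
```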

EW (00:40:35):

You had some questions for us.

JW (00:40:38):

Yeah. I am curious, you wrote a book as well, on embedded system design. What motivated you to start writing the book? And what motivated you to keep going once you were halfway in there?

EW (00:40:58):

The motivation to start was partially a level of ridiculousness. I wanted this book. I had been teaching people, on a small scale, these things. When I was talking to someone else who was in a similar position, we were talking about what we would want in this book, this mythical book that, if only we could find it, would mean we would not have to say the same things over and over again.

JW (00:41:28):

<laugh>

EW (00:41:29):

And then he turned to me and said, "You should write that." It was so bizarre of an idea that I actually pitched it to O'Reilly, and they said, "Yes." As for the motivation to go on, that was difficult. It was a tough time in my life. I had gotten very sick, and my mom had died. Then I was homebound and not willing to do anything that did not involve losing myself in writing or working. And even work was pretty monotonous. So I was motivated because I did not want to leave the house anyway. And O'Reilly, my editor, was pretty good about keeping me on task. You went through No Starch. They are a little looser with the editing, right?

JW (00:42:21):

Oh yeah, definitely. Well, after year six, Bill, the owner of NSP, was like, "Well, we should maybe get this book out, guys." And Colin and I were like, "Yeah, I think you are right." <laugh> He was the one who introduced us to the concept of, there can actually be a v2, right? This does not have to be perfect. It is good enough, just get it out. <laugh>

EW (00:42:47):

But now as I am looking at a v2, it seems so daunting. Like, how did I put all of this information together? So I totally see how it could take me seven years to make a v2.

JW (00:43:01):

Well, let me know how it goes, and then we might consider it too <laugh>. Maybe a question for Chris then. What is the coolest compliment you have had about work that you have done? Either about the podcast, or I know you are into music, or professionally...?

CW (00:43:20):

Coolest compliment. Oh, wow. Geez. See, I just take compliments and I do not file them anywhere.

JW (00:43:27):

<laugh>

EW (00:43:30):

But you give him a criticism-

JW (00:43:33):

Maybe where you have had an impact on somebody's life?

CW (00:43:35):

No, I think the best thing about work that I have probably ever had was, at a medical device company, hearing from people who had been helped by the device. The company I was at, I have talked about it before, did imaging that allowed surgeons to repair arteries that had blockages in people's legs; that was the first application. It is usually a symptom of diabetes, where you get peripheral artery disease. Do not want to trigger anyone, but your legs start having problems because they cannot get enough blood flow, and it is hard to fix. So we had a device that made that easier to fix.

(00:44:22):

We heard from lots of doctors and patients that we had saved their legs. That was always something that made me feel good, because in a lot of other companies it was like, I am making this thing and it is either very abstract or sits in a data center and sure it has an impact somewhere, but it is very down the line. And who knows if the impact is positive or negative overall <laugh>. But this was just something where it was like, something I did has helped some number of people improve their lives, and that was very gratifying.

JW (00:44:58):

Yeah, nice. I can recognize that, or I mean, I can understand that, I should say. I think with security, it can be difficult sometimes to- Just like what you said. We find a bunch of vulnerabilities. I am in the lucky space where at least the riskier customers, they take security very seriously. So the things we find usually get fixed.

(00:45:22):

But it is almost like the insurance world, right? We actually did not change any of the functional properties of the system. We just helped keep some accident from happening <laugh>. It is a bit more virtual, and I know that some people in the security industry struggle with that.

CW (00:45:42):

It is tough to find applications that you can point to and say directly, "This is a net positive good for the world. And I know it! I can see it right now." It has been rare in my career and I think Elecia has had- It has been less rare in her career, because she actively goes out and seeks those things. It is still hard, because sometimes you go out and seek those things and it turns out to be a red herring. I have had that with other medical device companies, where it is like, "Oh yeah, we are going to help people do this." And it turns out it is not really all that cool, or it is something else entirely.

JW (00:46:18):

Yeah. I have to say, that is one thing I did like about having the book published, is all the positive responses. That is something that when you are doing security projects for professional companies, they thank you and they are happy with the result, but it is much less tangible than, this person who is like, "Oh! Now I understand how this hardware security stuff works!" This is more gratifying feedback, I guess.

CW (00:46:51):

Yeah, that is the educator high too, right? That is one that I have had, and I know Elecia has had, where you teach somebody something and they say, "Oh! I understand this now and I did not before. Thank you." It is like, okay, put a little bit of light into the world.

JW (00:47:06):

Yeah, exactly.

CW (00:47:08):

I wanted to ask you a couple of forward-looking questions. When the side channel attacks started making a lot of news a couple years ago- Four years ago, three years ago, I do not know, time is a flat circle. I paid some attention to it and then I stopped paying attention to it, because it did not really affect me other than as a curiosity. Do you have a sense for what the state of that is now? Have we taken a big performance hit on CPUs and things, because we have turned off a lot of speculative kinds of things?

EW (00:47:47):

Speculative execution?

CW (00:47:47):

Yes. I am talking about speculative execution side-channel attacks.

JW (00:47:52):

Yeah. When you gave the timeline, I thought you were talking about speculative execution. Yeah.

CW (00:47:57):

They both have an S <laugh>.

JW (00:47:59):

Yeah. <laugh>

CW (00:48:02):

Have you been following that? Have those things been addressed in future designs? Or is it like, "No, we take the 10% performance hit. We just do not do those things anymore."?

JW (00:48:14):

I still remember the first time I was reading about this, the speculative execution vulnerabilities are basically a class of timing side channels. I have been dealing with timing side channels for a long time. So that was what piqued my interest. And I thought, "Oh crap! What they are actually exploiting is some hardware optimization." And I was thinking, "Well, this is a hardware optimization. How many hardware optimizations are there in a chip?"

CW (00:48:51):

<laugh>

JW (00:48:51):

I mean, that is what chips are! It is 20, 30, 40 years of optimizations stacked on top of each other. Unfortunately, that is how it panned out, right? Over the last few years, it is one after another. It is like, "Okay, let us look at the TLB side. Okay. Oh, we got problems there too. Let us look at the branch buffers. Oh, we got problems there too." This is not a surprise, right? Because all of these things, they optimize for particular cases. And if the, quote unquote, particular case happens to have some information that is interesting to be extracted, that can happen. With that said, I have to say, I am not an expert on speculative execution.

CW (00:49:40):

Sure.

JW (00:49:44):

I guess, going back to my former comment, I do not know <laugh>. I do think it is not relevant in all use cases. So in some use cases, where you do not have multi-tenancy on a system, it may actually be really hard to exploit some of these things. In general, I see that with more of the security bugs in CPUs: they do not apply in all situations. Then you can make these trade-off choices of, do we just disable feature X, or do we only disable feature X in situations Y and Z, and keep the performance in the other places.

(00:50:46):

That is perhaps a direction we need to go, where maybe it is safe by default, and maybe in an HPC data center where all the users are trusted, we turn these things off so we can get some extra performance. I do not know exactly which direction the mitigations are going. What I do know is, like I started off, this is kind of a fundamental problem where we are exploiting optimizations, and optimizations are what make our CPUs fast. And not too hot. <laugh>

EW (00:51:19):

Is it a losing battle to be a defender? I mean, some of these techniques, if you go back to the most secure thing ever in 1995, it is trivial to break now. Is it always just a matter of, I can only secure it for a little while, if that?

JW (00:51:44):

Yeah, I think...

EW (00:51:45):

This also could be framed as, why do you have the easy job?

JW (00:51:48):

<laugh>

EW (00:51:48):

<laugh>

JW (00:51:54):

That is a personality question, I think <laugh>. Let us stick to the technology side. Well, I just like hacking stuff, that is the personality side. Though I think in the most recent years, I have been shifting more and more. I am turning from red into blue, I guess. I am becoming more and more of a Smurf, because I do not want to be done when I say, "Hey look, this is broken. You go fix it." I want to be part of how to fix things as well.

(00:52:35):

I think what we have to watch out for is the security nihilism of, "Well it is all broken, let us just give up. It does not matter." Which I completely disagree with. I mean, if we look outside, my bank account has not been plundered over the last years. There is a little bit of fraud on my card every now and then, but I call the bank, that is solved. Tesla cars are not all being hacked and driven to remote areas of the country. So, if you zoom out, it is not quite that bad.

(00:53:15):

My earlier statement that everything can be hacked, that is still true, right? But there also needs to be an incentive for people to hack things. And it needs to be at a cost that they are willing to spend. We cannot make the risk go to zero, but if we spend our security defense budget in the right areas, I think we can do quite well as a society.

CW (00:53:39):

And I think you are discounting a little bit the efforts of people like you, who are trying to get ahead of some of these things. There is a constant cat and mouse kind of thing. As long as there is a lot of effort on the secure side, it is going to mitigate a lot of the stuff.

JW (00:53:57):

Oh, definitely. Obviously I believe that, even though we are maybe not fundamentally solving problems, we are at least incrementally solving some of these security challenges that we have, and the cat and mouse game is an integral part of that. Definitely.

EW (00:54:14):

Reading through your book, I did not get a sense of how I can make my systems more secure, but that is because most of my systems are not worth that level of security. None of them play a part where they are- They are not controlling nuclear sites. If somebody side channels one of my children's toys, that is totally fine with me. That seems like a great place to try it out.

JW (00:54:49):

Yeah, well definitely go ahead and do that, right? We jokingly had in one of the first chapters, we were doing threat modeling on a toothbrush. I think "threat modeling" is actually the key word here, right? Because you are saying, "It really does not matter if somebody does side channel on my toys." That means you have thought about it, which is a fancy way of saying, thinking about security is threat modeling. Which involves thinking about, what are the things we actually want to protect? Against who are we protecting this? Against what kind of attacks?

(00:55:24):

And like I said earlier, you want to spend your security budget where it makes sense. Side channel on toys? Go ahead <laugh>, there is no need to protect against that. If you are launching nukes, then I hope there are more than just a few side channel countermeasures in there, right? I hope there are layers and layers of mitigations around that.

CW (00:55:51):

Well, until a few years ago, they were still using floppy discs. So I think that is a protection in and of itself.

JW (00:55:56):

Yeah. Perfect. Please keep it that way.

EW (00:56:00):

I have one more listener question, from James, which kind of fits here. "How did we fall into the trap of thinking we could control the hardware or software, after we sell it to a customer?"

CW (00:56:12):

Who is we? <laugh>

EW (00:56:15):

How did we as engineers-

CW (00:56:16):

Ah, okay.

EW (00:56:16):

Believe that we could control the hardware and software, after the marketing team sells it to a customer?

CW (00:56:24):

Sure.

JW (00:56:24):

It is almost a question for you two. Do you believe that?

CW (00:56:29):

I do not believe that. <laugh>

JW (00:56:29):

Oh. Okay.

EW (00:56:34):

There is like the tractors that-

CW (00:56:37):

Oh, okay, sure.

EW (00:56:39):

They do not want people- They want to be able to charge more money, so they say you cannot fix your own tractor. On one hand, I understand why they want to do that, not only for the money part, but also for the security part. On the other hand, once I buy something, it is mine.

CW (00:57:00):

Well, yes.

EW (00:57:03):

And you should not be able to tell me I cannot open it.

CW (00:57:07):

Depends on the definition of "buy".

EW (00:57:10):

Well, with the tractors, they tried to change that definition.

CW (00:57:13):

Yeah, exactly.

EW (00:57:13):

And that was not great. I mean, as an engineer, I definitely fall into this. It is like, "Okay, if something breaks here, we will fix it. Or if something is not good here, we will fix it." But that is because I am such a fixer. I do not usually think about, "Oh well, my evil company is now going to sell the data to more evil companies, and they are going to use my ability to update firmware to find out more data, and badness, badness, badness."

(00:57:47):

I can see both sides of, should we control, or should we not control? How do we tell the customer what they are in charge of versus what we are? They are in charge of updating the firmware, because... They do not always, and then they wonder why they get hacked.

CW (00:58:13):

I think it is a complex question because it depends on what "control" means. It depends on the definitions of a lot of these things. I mean, the first time I came across this was, we had a dumb medical device, somebody had a dumb idea to sell a consumable, and we put authentication chips in the consumable. And, within two months of selling the thing, somebody had hacked it and was reprogramming the consumables to extend their life and stuff. And we never figured out how they did it.

EW (00:58:41):

But your consumables cost hundreds of dollars and were worth dollars.

CW (00:58:48):

They were worth- They cost us 20 or 30, and we charged 400, I think. Yeah.

EW (00:58:54):

Yeah. So there was an incentive.

CW (00:58:55):

Oh yeah. There was a huge incentive. I knew that from the beginning. As I was implementing it, I was like, "Well, this is going to be a problem, because even though I cannot see how to break this, somebody is going to figure it out." And lo and behold they did. And I never figured out how, and we kept making countermeasures and stuff.

(00:59:09):

Even in that situation, I was like, "Well, this is never going to work <laugh>." Somebody is going to figure it out because they have physical access. And once you have physical access, unless you have got a team of commandos coming into your customer's office to haul them off, all bets are off.

JW (00:59:29):

Yeah. Especially in lower cost devices, you are not going to protect against physical attacks. What you get with, for instance, firmware updates really is a double-edged sword, right? On the one hand, you want to make sure that your device is running proper software, especially if it is a safety critical device, right? You do not want somebody to just upload another firmware that is going to cause who knows what, right? But at the same time, you are locking people out of the devices that they have bought. That is a total double-edged sword. I think that really depends, from device to device, on how you want to make that choice.

(01:00:15):

Going back to the question, I talk to a lot of developers, obviously, when I am doing my work, and I think the majority of them, they do realize that when something is in the field, people can poke at it. I think there is a minority that will sometimes have discussions of like, "Hey, you have this function sitting in your API, that you say is deprecated." Usually the reply is, "Yeah, we are not using that." Then I am saying, "But an attacker can use it." It is there. It is still there.

(01:00:58):

I think that illustrates a way of thinking about how a device is being used, and not always realizing that if you put something in there that may not be the intended use, it can still be used. That is maybe the difference between more of a developer mindset of how should this be used, versus the attacker mindset of how could this be used?

EW (01:01:26):

Riscure, that is your company. What do you do?

JW (01:01:32):

I am the CTO for Riscure North America, and that is a fancy way of saying that I do two main things. One is when our technical teams work with customers, and they have some really difficult questions, then they escalate to me every now and then. So I will join the teams and help them out. The second part of it is looking forward, like, on the innovation side, what are the things we want to put into the market in the next 1, 2, 3 years? Those are the two main responsibilities I have.

EW (01:02:11):

And you are hiring?

JW (01:02:14):

Always. We are always looking for people who are interested in security; they do not necessarily need to have a hardware security background.

EW (01:02:29):

You have got a book to train them.

JW (01:02:31):

We got a book to train them <laugh>, we got courses to train them as well. There is quite often a software component, firmware component, to what we do as well. So a lot of people come in starting on that side, and on the job learn more of the hardware stuff.

EW (01:02:53):

I have one last question. Well, I have two last questions for you. The first probably is a whole podcast on its own, but if you had to get an implanted medical device, a pacemaker or a pancreas, how much more would you be willing to spend, if you could look at the firmware and possibly patch it yourself?

JW (01:03:16):

That would be a whole podcast in itself. To summarize, I think that would be very valuable to me. Mostly on the analysis side, I would want to see what it does and whether it could be attacked in some way. I am not comfortable enough myself to start loading code into my own body, in parts that might be harmful to my health. I would like to have trained engineers to do that part. But yeah, I would pay for that for sure.

CW (01:04:06):

I mean, you never see Darth Vader push any of those buttons on his chest thing. You have got to figure he has got people to do that, who know what they are doing.

JW (01:04:14):

Exactly.

EW (01:04:16):

Jasper, do you have any thoughts you would like to leave us with?

JW (01:04:23):

I would say it was great to be on the show. Thank you for having me. I also really appreciate you guys for doing this podcast. Engineering is an important part of our progress as a society, and embedded systems are an increasingly large part of that. So, keep on keeping on.

EW (01:04:43):

Thank you.

CW (01:04:43):

Thanks.

EW (01:04:43):

Our guest has been Jasper van Woudenberg, CTO of Riscure North America, and co-author of the Hardware Hacking Handbook.

CW (01:04:56):

Thanks, Jasper.

JW (01:04:58):

Yeah, thanks so much.

EW (01:04:59):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for questions. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(01:05:13):

Now, a quote to leave you with. Let us go with Cory Doctorow, "Never underestimate the determination of a kid who is time-rich and cash-poor."