515: Script Boomers

Transcript from 515: Script Boomers with Nick Kartsioukas, Christopher White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Nick Kartsioukas. We are going to talk about- Well, we are going to talk about security.

CW (00:00:19):

I feel secure. Hello, Nick. Welcome.

NK (00:00:23):

Hello. I am in fact very insecure.

CW (00:00:23):

Oh. Well. I was lying. So.

NK (00:00:25):

There you go.

EW (00:00:27):

That is going to come up in a minute. So, remember this. Nick, could you tell us about yourself, as if we met at Supercon?

NK (00:00:36):

Sure. Hi, I am Nick, a security engineer who plays with mostly infrastructure and network security stuff, with a background in Linux system administration and network engineering.

(00:00:50):

At home, I like to play with random electronics things, like ham radio and embedded systems and radio controlled aircraft. Way too many things that occupy my time.

EW (00:01:05):

We are going to do lightning round. But for this lightning round, we have a special request. We only want you to lie. We want you to go as fast as you can, but always lying. Are you ready?

NK (00:01:18):

Yes.

CW (00:01:18):

What was the first concert you attended?

NK (00:01:20):

"Badgers on Ice."

EW (00:01:22):

What is your oldest cousin's middle name?

NK (00:01:24):

Grover.

EW (00:01:24):

In what city or town did your parents meet?

NK (00:01:28):

Antarctica, in McMurdo Station.

EW (00:01:31):

What was <laugh> your childhood best friend's nickname?

NK (00:01:34):

Hey, you. Get off that chair.

CW (00:01:35):

What is your mother's maiden name?

NK (00:01:36):

Maiden.

EW (00:01:36):

<laugh> What city were you born in?

NK (00:01:44):

The City of Angels.

CW (00:01:45):

What is your name?

NK (00:01:47):

Do you have a warrant?

EW (00:01:49):

What is your quest?

NK (00:01:51):

To seek the Grail.

CW (00:01:52):

What is your favorite color?

NK (00:01:54):

Blue. No. Yellow!

EW (00:01:57):

What was your favorite food as a child?

NK (00:01:59):

Bananas.

CW (00:02:00):

Do you like to complete one project or start a dozen?

NK (00:02:03):

I complete every single project that I start.

CW (00:02:05):

<laugh>

EW (00:02:07):

Favorite fictional robot? This one is hard, is it not?

CW (00:02:12):

It is hard to lie about this. <laugh>

EW (00:02:13):

<laugh>

NK (00:02:15):

Yeah. The little rat robot thingy in "Star Wars." The one that bumped into...

EW (00:02:24):

The mouse droids?

NK (00:02:25):

Yes. The mouse droids.

CW (00:02:27):

Huh. Those may actually be my favorite. What is a tip everyone should know? <laugh>

EW (00:02:30):

<laugh>

NK (00:02:34):

Always run Telnet with no password as root.

CW (00:02:40):

The world might be a better place if we did that, actually. We should run that experiment.

EW (00:02:45):

Not in our house network, please. <laugh>

(00:02:50):

Okay. Lightning round is over. You no longer need to lie. Of course, you are welcome to.

CW (00:02:55):

But you are free to lie.

EW (00:02:57):

But it would be nice if you told us, if you were doing that. So were those good security questions, or bad ones?

NK (00:03:06):

As far as establishing identity type security questions?

CW (00:03:09):

Uh huh.

NK (00:03:12):

So a fun thing, whenever a website has me fill out those little security questions, I generate a 64 character random string, and paste it in, and then save it in my password manager. And hope that I never have to read that off to somebody on the phone.
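
Nick's trick can be sketched in a few lines of Python. The 64-character length and the letters-and-digits alphabet here are just one reasonable choice, not something Nick specified; the point is that the answer comes from a cryptographic random source rather than from your biography.

```python
import secrets
import string

def random_answer(length: int = 64) -> str:
    """Generate a random 'security question' answer to paste into the form
    and save in a password manager."""
    alphabet = string.ascii_letters + string.digits
    # secrets (not random) draws from a cryptographically strong source
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_answer())
```

The only downside, as Nick notes, is someday having to read the result to a support agent over the phone.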

CW (00:03:29):

Yep. <laugh>

NK (00:03:31):

But anything that is one of those questions, is usually easily found through social media or other easily searched sources.

EW (00:03:41):

I do not do the random string, but I do lie, and then put it in my 1Password. Because then I have what I said, for fear that someday I have to answer, "What was your best friend's nickname?" And I say, "Captain Underpants," and the answer was not that. Because that would be super embarrassing.

(00:03:59):

Okay. So, security. You are a security engineer. Senior Security Engineer is your title. But, what do you do every day? Or, what does a workday look like for you?

NK (00:04:14):

Oh. There is not really a workday that is a typical workday. A lot of what I do is helping internal users configure their services and make use of networks in a secure way.

(00:04:34):

People will come to me or my team and ask, "Hey, I want to stand up this service to do X. How do I do that, not insanely?" And we will provide advice, guidance, on how to do so.

(00:04:46):

We will review things that people want to connect to the network, or use to pass network traffic. We will review ACL requests for people that want to make connections out to the internet, or in from the internet. It is generally- It covers a lot, really.

(00:05:04):

My team was sort of the team of last resort on the security org. So whenever a developer or someone else would have a problem, that no other security team really had an answer for, they would come to us and be like, "Hey. Can you help us figure this out?" We would do so to the best of our ability, and usually provide some sort of paved path for them to go forward, as well as any other team that had similar questions.

EW (00:05:35):

Is it mostly about the people, about the product needs and guidelines, or about software versions and patches?

NK (00:05:48):

Yes. There is some of everything in there. So we will- Let us say somebody wants to connect a device to the network. We will ask them, "Okay. What is running on this device? What services does it have listening for network traffic? What versions of applications are running on it? What kernel version? What library versions are running on it? How does the vendor notify you of updates, if there are any to be had? How do you keep it up to date with vulnerability notices?" That sort of thing.

(00:06:27):

There is also the, "What are you using this for? What kind of data is it handling? What do we need to worry about, as far as authentication services that are on it? Is it encrypting its traffic?" It is sort of the wide gamut.

EW (00:06:45):

Is there a checklist I could go through as a developer of embedded devices?

NK (00:06:51):

I could write one for you.

EW (00:06:52):

Okay.

NK (00:06:54):

But I am not really- I do not know that there necessarily is one.

EW (00:07:00):

There used to be a good CERT one that I would consult, but it was like, "Check your inputs and make sure that they do not overrun," which is not really the class of problems anymore.

NK (00:07:12):

It is a class of problems, but- Unfortunately, those are still problems. But yeah, there is a lot more to keep in mind as well now. There is, let us see, the OWASP Foundation. What is it? I forget what it stands for. But they have an embedded security project that they are working on, for creating some guidance materials and documentation and such. But there is nothing complete for consumption yet.

(00:07:46):

National Science Foundation also has the Center for Hardware and Embedded Systems Security and Trust, or CHEST. But again, it does not have a lot of complete documentation out.

(00:08:01):

There are also a bunch of various NIST standards related to secure things. Yeah, there is a lot of documentation out there. It is trying to find what is applicable to you. That is the challenge.

EW (00:08:19):

Yes. What is applicable to me? How do I pay for it? How long do I maintain it?

CW (00:08:26):

Well, what is your product vulnerable to? Right? You have to do some threat modeling. If it is an IoT- Well, not an IoT device. If it is an embedded device from 2004, you put the firmware on it and you ship it. Maybe there is a baroque way to do a firmware update manually or something, but that was it.

EW (00:08:49):

With the right tool.

CW (00:08:50):

With the right tool.

EW (00:08:50):

When you were standing right there.

CW (00:08:51):

Right. Yeah. And then things have evolved, to where any small device in an organization or your house, could be as capable of communicating as any computer was, or any blade server was, 15 or 20 years ago.

(00:09:13):

So now you have to deal with, "Well. Can someone get into this thing remotely? Can somebody get into it, without being physically present? What does my supply chain look like if there is a firmware update? Can somebody hack my organization and get to a device 3,000 miles away, by changing-" It has become much more complicated.

NK (00:09:37):

If somebody discovers a vulnerability in a common library that you used, a particular version of it, then okay, what is that going to affect downstream from that library? All of the software compiled against it. That sort of stuff.

CW (00:09:54):

Right. Which happens all the time.

NK (00:09:55):

Yep.

EW (00:10:02):

<music> Before we dive back in, a quick note from our sponsor, RunSafe Security. If you are working with C or C++ in embedded systems, you know that security is always a balancing act. You want to protect your code, but you do not want to rewrite everything in Rust, or slow down performance. That is where RunSafe comes in.

(00:10:21):

RunSafe's platform helps engineers build safer, more reliable devices, by automatically generating SBOMs, identifying vulnerabilities early, and hardening software, without changing your development flow.

(00:10:36):

And here is the cool part. Their patented Load-time Function Randomization rearranges your code in memory every time it runs. So if a vulnerability exists, an attacker cannot predict where to strike. It is like giving your code a new set of armor every time it boots.

(00:10:58):

RunSafe works across aerospace, defense, automotive, and other industries where reliability is critical. But really, is not reliability always critical? If you are writing embedded code, it is worth taking a look.

(00:11:12):

You can learn more, and see how it works, at RunSafeSecurity.com/embeddedfm. That is RunSafeSecurity.com/embeddedfm. Thank you RunSafe Security for sponsoring this show. <music>

(00:11:24):

I was introduced to the term "SBOM" recently, which I was told means "Software Bill of Materials." But that does not make any sense, because Software Bill of Materials is a bunch of electrons.

CW (00:11:41):

<laugh>

NK (00:11:43):

But there is a- Well, I should say the idea is, you have a list of all of the software used in the chain of- Like, you start with your IDE, and then there are all the libraries that you have made use of. There is all of the externally produced and internally produced code, that goes into that application. There are your build systems. There are your deployment systems. And all of the versions of each of those.

(00:12:17):

Given that list, that is your Software Bill of Materials, used to produce a particular release. And then from that, you can link that against other sources of information, like the CVE database and say, "Oh, hey. We built this release with this particular Software Bill of Materials. This version of whatever this one thing was, has a vulnerability. So we are going to need to go back, update that, produce a new release, and push that out."
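The SBOM idea Nick describes can be sketched as a simple data structure. The release name, component names, and versions below are all made up for illustration; real tools emit standard formats such as CycloneDX or SPDX rather than ad hoc dictionaries.

```python
# A minimal, hypothetical SBOM record for one release: the components and the
# exact versions that went into the build.
sbom = {
    "release": "robot-firmware 2.3.1",
    "components": [
        {"name": "mbedtls", "version": "3.5.1"},
        {"name": "zlib", "version": "1.3"},
        {"name": "freertos-kernel", "version": "10.6.1"},
    ],
}

def components_of(sbom: dict) -> set[tuple[str, str]]:
    """Flatten the SBOM to (name, version) pairs for easy lookup."""
    return {(c["name"], c["version"]) for c in sbom["components"]}

print(components_of(sbom))
```

Given that flattened set, linking a release against a vulnerability database becomes a simple membership check.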

EW (00:12:52):

CVE?

NK (00:12:58):

I cannot remember what it stands for. Again, I am bad with acronyms. But they are vulnerability announcements, basically.

CW (00:13:07):

"Common Vulnerabilities and Exposures" is what the Google tells me.

NK (00:13:10):

Yeah. So that is going to be an announcement of a particular vulnerability, in a particular piece of software or library. It is going to have information on the likelihood of exploit, or the ease of exploit. What that exploit will provide or grant or do. If it is just like a denial of service, or if it is a remote code execution.

(00:13:35):

Usually there is a Common Vulnerability Scoring System number attached to it, from zero to ten. Ten being the most hair on fire, running around, patching all the things. That will give you information on what that vulnerability is, how somebody exploits it, and potential mitigations if there are any.

EW (00:14:01):

So if I was a script kiddie, I would go here and- Is script kiddie still a thing?

CW (00:14:06):

Sure. I think they have grown up now. They are script... I do not know.

EW (00:14:11):

Script boomers?

CW (00:14:12):

Sure.

EW (00:14:15):

So if I was someone who was idly amusing myself, by hacking into systems.

CW (00:14:22):

By committing crimes.

EW (00:14:23):

By committing crimes, I could go to this database and look up things, and use this to see who was vulnerable? See who had not been patching things?

CW (00:14:34):

Well, this is the classic security through obscurity. If nobody knows there is a problem, then certainly it is safe, right? <laugh>

EW (00:14:40):

No.

NK (00:14:44):

So with this, you cannot really tell who is running a particular version of a library. But if you have access to a piece of software, you can usually poke at it and see, "All right. This is linked against whatever library, and it is this version. I know that this version has these vulnerabilities. So I am going to start trying to attack the application, knowing that. I will see if I can make any headway, using these library vulnerabilities or these software vulnerabilities, whatever is listed here."

(00:15:17):

Yeah, it is a risk. People will use the information in the CVE, to try and exploit systems. In fact, that is usually also when you would see a lot of attacks against, say, Windows, back when Patch Tuesday used to be a thing. They would release the patches, the quote unquote "bad guys" would go and grab those patches, disassemble them, see what the changes were, figure out what vulnerabilities were being fixed, and then produce exploits against those.

EW (00:15:56):

Chris is right. You cannot not publish them.

NK (00:15:58):

Yep.

CW (00:16:00):

But you do have to- There does have to be some coordination. It cannot be... <laugh>

NK (00:16:05):

Yeah. Usually there is some sort of responsible disclosure process, where if a security researcher finds a vulnerability in something, they will contact the vendor or maintainer of whatever it is. They will provide their details and say, "All right. Here is the information. I am going to publish this in 90 days."

(00:16:25):

Or work with the vendor, if they have some other timelines that they want to work with or work towards. But usually 90 days is the standard. They will give the vendor the information. The vendor will produce fixes, start rolling those out. Then the stuff will get announced.

EW (00:16:47):

I really liked the concept of the SBOM, because I have always thought that if you are building something, you should be able to rebuild it.

CW (00:16:54):

Well. And the FDA. That was reminding me of what- For the FDA process, there was the software environment description, there were some other documents that all went into that.

EW (00:17:03):

Right. Yes. Software configuration control. Whatever that one was.

CW (00:17:04):

Right. It was basically that. Like, "This release has all this crap, and we used Excel version 5 to make this list, even." Right?

EW (00:17:12):

That was actually what I thought we were doing, when a company I was working with started it. And then they are like, "Oh, no, it is for security." And I am like, "Okay, I guess. But can we put the security things off to the side, so I can see the good stuff?" Sadly, a lot of my mindset is like that. Security complicates everything, in a way that makes it hard to do the engineering.

NK (00:17:46):

That is an unfortunate truth with a lot of security things. That is where it falls upon security teams to make that more transparent to other users within their organization.

(00:18:00):

So, working with dev teams, to find some standard processes. Or like I said, paved paths where, "All right. If I want to do this thing, this is the way to do it securely. If I source my software or my dependencies in this way. Or if I always use this base configuration to start from, for an internet facing service." Things like that.

(00:18:28):

But for smaller organizations, where you do not have a dedicated security team, or even a dedicated security person. Where security is just you have your one IT person, that is running around fixing printers, and configuring desk phones, and plugging things into the network.

(00:18:47):

And also, "Oh yeah, I need to make sure that our network is resilient to botnet attacks and all of that." It becomes very difficult. They do not even have a concept of what it takes for secure software development stuff.

EW (00:19:06):

I have been seeing a lot of how to get better at this through FDA documentation. They have "Cybersecurity in Medical Devices," is one of their newer documents. It is not a great read, let us be realistic about that.

(00:19:20):

But how do I- As somebody who tends to work in a small team, and does not have a security person she can go to while working on client projects, how do I make this part of the process, without overwhelming everybody else in my team? Are there baby steps I should be taking? Or is this an all or nothing thing?

NK (00:19:46):

Starting with the Software Bill of Materials, and keeping an eye on the CVE databases, is definitely a starting point.

(00:19:55):

I do not know off the top of my head of any. But there are tools that will scan the CVE database for you. If you put in, "These are the versions of these things that I have," it will throw an alert if something on that list comes up. So if you can feed it your Software Bill of Materials, it will go and make sure that stuff in the CVE database does not match up with anything in your list.
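The kind of scan Nick describes, matching an SBOM against vulnerability announcements, might look roughly like this. The advisory records, IDs, and component list are invented for illustration; a real tool would pull from the actual CVE or NVD feeds and use proper version-range matching.

```python
# Hypothetical advisory records, shaped loosely like CVE entries: an ID, the
# affected package and version, and a CVSS-style severity score.
advisories = [
    {"id": "CVE-2099-0001", "package": "zlib", "version": "1.3", "score": 9.8},
    {"id": "CVE-2099-0002", "package": "openssl", "version": "3.0.0", "score": 7.5},
]

# The (name, version) pairs from our hypothetical Software Bill of Materials.
sbom_components = {("zlib", "1.3"), ("mbedtls", "3.5.1")}

def affected(components: set, advisories: list) -> list[str]:
    """Return the IDs of advisories that match a component in the SBOM,
    i.e. the things worth throwing an alert about."""
    return [a["id"] for a in advisories
            if (a["package"], a["version"]) in components]

print(affected(sbom_components, advisories))
```

Only the zlib advisory matches here; the openssl one is ignored because that component is not in the list.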

(00:20:25):

And just keeping an eye on- Let us see. There is the Cybersecurity and Infrastructure Security Agency, CISA, is a US government agency. But they provide a lot of releases about security issues and guidance.

(00:20:44):

A lot of what I have seen out of them recently, has been operations technology focused. Like, programmable logic controllers and other industrial controls. But they have got stuff on all kinds of security issues and resources on their site.

EW (00:21:04):

Okay. So I come to you with an idea, that I want to connect this robot to the internet through- I want to connect this robot through satphone or whatever to an AWS Cloud. From there, my team will look at it and be able to communicate some things back to my device. I mean basic IoT device. I guess I have described "basic IoT device."

CW (00:21:37):

Right.

EW (00:21:37):

Where is the S in IoT? <laugh> Where do I get started? Okay, so we have talked about SBOMs. That would take care of my hardware abstraction layer, and my compilers, and my APIs.

CW (00:21:56):

In this scenario, are you talking directly from a device over the internet to your robot? Or is there an intermediate server and infrastructure? Is there a rendezvous cloud thing?

EW (00:22:07):

I am on Amazon Cloud.

CW (00:22:07):

Okay.

NK (00:22:10):

In that instance- My team does not do as much with the application security side, so I would not have done a deep dive into the software running on the robot. But what I would look at would be, how is your robot authenticating itself to the cloud service? And how is it basically confirming that the cloud service endpoint that it is talking to, is a legitimate endpoint? Some ways you can do that are with mutual TLS authentication.

EW (00:22:43):

Does that happen in manufacturing?

NK (00:22:47):

The provisioning of the certificate can happen at manufacturing, or it can be an ongoing thing. I would recommend using some form of hardware root of trust, or secure element or secure enclave, on your embedded device, to store a private key. And then a certificate provisioned to identify that device, signed by whatever certificate authority you have trusted.

(00:23:17):

You can have an internal certificate authority, or an external CA, that is generally trusted. That device would then have a signed certificate saying, "I am this device." It is signed by somebody that you trust. It will present that to the server when it connects.

(00:23:35):

The server will look at that and say, "Ah, this is a trusted device." It will look at its certificate revocation list, say, "Okay. Good. This has not been revoked, so I trust this connection from you. It is identifying you as Robot A."

(00:23:51):

And then the robot, when it is grabbing the certificate from the server, will say, "Okay. This is a certificate that is provisioned for this host name. It is signed by this trusted certificate authority, which I trust to sign things. So I know that this is the server that it says it is."

(00:24:11):

And then you have this two-way authentication and trust mechanism, between your device and your cloud service. That is how I would say you should probably have your communication channel set up.
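The two-way arrangement Nick describes can be sketched with Python's standard `ssl` module. The certificate and key file names are hypothetical, so those load calls are left as comments; the live lines just show each side being configured to demand and verify the other's certificate.

```python
import ssl

# Server side: demand a client certificate, and verify it against the CA
# that signs our device certificates.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients with no cert
# server_ctx.load_cert_chain("server.pem", "server.key")  # server identity (hypothetical paths)
# server_ctx.load_verify_locations("device-ca.pem")       # CA that signed Robot A's cert

# Client (robot) side: present our per-unit certificate, and verify the
# server's certificate and host name against the CA we trust.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.check_hostname = True            # cert must match the host name we dialed
client_ctx.verify_mode = ssl.CERT_REQUIRED
# client_ctx.load_cert_chain("robot-a.pem", "robot-a.key")  # per-unit identity
# client_ctx.load_verify_locations("service-ca.pem")        # CA that signs the server

print(server_ctx.verify_mode, client_ctx.check_hostname)
```

Revocation checking, which Nick also mentions, is not handled by these defaults and would need to be layered on separately (for example with CRLs or OCSP).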

EW (00:24:27):

This is a per unit?

NK (00:24:29):

Yep.

CW (00:24:29):

Certificate.

EW (00:24:31):

Certificate, yeah.

NK (00:24:32):

Yeah. Each unit would have its own unique private key, and then its own certificate provisioned. I know with AWS, there are some services that they have to allow mass provisioning of devices for that. There is, I think it is the AWS Greengrass service. That lets you just kind of mass provision and mass manage fleets of devices.

EW (00:24:58):

That is a non-trivial problem. I guess this comes out of the times when one security key was actually okay for some number of units, because...

CW (00:25:09):

It was too hard to do anything else.

EW (00:25:10):

It was too hard to do anything else. We did not really have the hardware encryption tools that we do now.

(00:25:19):

I see a lot of people, they have their microcontroller to do everything they need to do in real time. Then they pop a Raspberry Pi on there and say, "Okay. You are the interface to the internet."

(00:25:33):

A part of me is appalled, because that seems like a good way to create a botnet. Part of me is happy, because the microcontroller does not have as many features for supporting security, and it seems like the Raspberry Pi has a lot more support for that sort of thing. Or am I just pushing my problems up to a computer I control less?

NK (00:25:58):

The Raspberry Pi is going to have a lot more capability. But it is also going to have a lot more complexity. So you have to find that balance of, "All right. What am I willing to take on, as far as management of this more complex device, that has a lot more software moving parts to it? Versus an embedded microcontroller to do my communication up to a remote host."

(00:26:24):

If you are doing- Basically, if you are not running a web server or API endpoint or things like that on your microcontroller, having it just talk out to a remote server. I think there are microcontrollers that have crypto engines in them. They have secure elements. They have hardware random number generators. They are pretty capable, as far as getting all those operations done.

(00:26:53):

But again, you are raising your hardware Bill of Materials cost. So it is all a trade-off.

EW (00:27:03):

The Raspberry Pi adds cost as well. Sometimes I do want to spend that on my processor. But I also perhaps naively feel like the Raspberry Pi, as a secure internet citizen, is more robust than whatever I hack together as an add-on to my tool.

CW (00:27:26):

Mm! Yeah. But you are dealing with Linux at that point, which your Software Bill of Materials is now no longer "my firmware." It is now the entire Linux ecosystem, and all the patching.

NK (00:27:37):

There is now Debian.

CW (00:27:38):

Patching that has to happen there. Suddenly you are open to vulnerabilities that are other people's problem, that you have to solve, I guess.

EW (00:27:48):

I am open to more vulnerabilities, because there are more people playing in this area.

CW (00:27:52):

Yeah.

NK (00:27:53):

And now your update process also has so much more to update. Do you want to do an immutable OS image, that is just sort of, "This is my snapshot in time of all of these packages, and this is what you have on your device"? And then send a new immutable image that replaces that? Or do you want to do updates in place of all the packages, as they roll out?

EW (00:28:15):

Okay, this is the other problem with security, is I mentioned I was on a satellite phone here.

CW (00:28:20):

Well wait <laugh>. Well then, you should probably not send Linux over that.

NK (00:28:24):

Probably not.

EW (00:28:25):

Exactly! Bytes matter.

CW (00:28:29):

Is the Raspberry Pi running off an SD card?

EW (00:28:32):

Well, yes, and we do have grad students going out to visit the units occasionally. So we can make them carry SD cards. That is lighter than everything else we are making them carry.

CW (00:28:40):

I just wondered if you knew the bad news about Raspberry Pis and SD cards.

NK (00:28:44):

That is where the immutable OS type thing comes in, where you can mount the SD card or your disc image read-only. Then you have a partition that is just for configuration data, that you remount as read-write when you want to make changes. And then you remount it again read-only, so that it does not get messed up when the Raspberry Pi inevitably gets turned off without a clean shutdown.
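The remount dance Nick describes might be sketched like this. The `/config` mount point is hypothetical; the idea is that the configuration partition is writable only for the duration of a change, so a sudden power cut almost always lands while it is read-only.

```python
import subprocess

CONFIG_MOUNT = "/config"  # hypothetical partition for mutable settings

def remount_cmd(mountpoint: str, writable: bool) -> list[str]:
    """Build the mount command that flips a partition between ro and rw."""
    mode = "rw" if writable else "ro"
    return ["mount", "-o", f"remount,{mode}", mountpoint]

def update_config(write_fn) -> None:
    """Remount the config partition read-write, apply the change, and
    remount it read-only again even if the change fails."""
    subprocess.run(remount_cmd(CONFIG_MOUNT, True), check=True)
    try:
        write_fn()
    finally:
        subprocess.run(remount_cmd(CONFIG_MOUNT, False), check=True)
```

With the root filesystem image itself kept read-only, pushing an update then means shipping a whole new image rather than patching packages in place.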

EW (00:29:10):

Okay. I missed something. So Raspberry Pis-

CW (00:29:12):

They burn out SD cards really fast.

NK (00:29:13):

Yeah, so as the SD card-

EW (00:29:16):

Oh. Yeah, we knew that.

NK (00:29:17):

Yeah, they have limited write cycles. But also if you reboot a Raspberry Pi, or pull power from it mid operation, without it having done a clean shutdown, there is the possibility of data corruption on the SD card. Which is usually unrecoverable, and will prevent the device from booting up next time.

EW (00:29:39):

But we have been doing this for a while. Are the Raspberry Pis just dying slowly and we just do not notice?

CW (00:29:46):

It is just a widespread complaint.

EW (00:29:49):

Argh! This is like ejecting the drive when you need to do USB.

CW (00:29:52):

A lot of people switch from the SD cards to NVMe, or some other drive.

NK (00:29:57):

Even USB drives have been found to be a bit more reliable than the SD cards. So, yeah.

EW (00:30:03):

No, I am sorry, we just shipped this product. We are not doing this. <laugh> I should not have used it as an example.

(00:30:08):

But going back to, many of the devices do have small pipelines. Security updates mean I need to spend more money updating those, because I need to go over these small expensive pipelines to the data. Where I might have a monitor that is slowly sending data. It is still an internet device, so I have to be able to update it now, including the internet side.

(00:30:38):

Basically I want you to say, "No. You can do X. It will be much easier." Go.

CW (00:30:44):

<laugh>

NK (00:30:45):

There are things you can do, depending again on your threat model. If you have a five ton robot arm that is swinging around pallets of products in a warehouse, that is going to be different than if you have a robot that is say in Antarctica, pecking at the snow to pull samples off of whatever has most recently fallen.

CW (00:31:12):

That is pretty close.

EW (00:31:13):

It is actually pretty close. <laugh>

NK (00:31:18):

Wow, okay. Well. Neat. It depends on the risks and threats associated with your device. So if you have something set up so that it is only making outbound connections, it is not accepting inbound connections from the internet at large, that reduces your risk surface or your threat surface greatly.

(00:31:42):

Now you are worried about, "Okay, what happens if there is an attacker within my communications chain?" So, a person in the middle type attack. Are there things that somebody can do against the traffic itself, to spoof traffic from a legit server? Can they modify data coming from or to my device?

(00:32:06):

And then if there is a vulnerability in say, your TLS library that is doing the mutual TLS auth, where it turns out it was not actually checking the trust chain of a certificate presented to it, then well, yep, that is a problem.

(00:32:26):

But again, it is going to depend on what happens if somebody does attack this device. What is the outcome going to be? What is the worst that can happen?

EW (00:32:36):

Well, that is the thing. The worst that could happen for my device, is that it does not get to do what it is supposed to. But that is from my perspective, of my goal for my device.

(00:32:49):

But when I think about it more holistically, probably something worse my device could do would be to become part of a botnet, also communicating over my satphone. Not only running up huge bills, but causing problems for other people. I want to be able to say, "It would be a huge loss and expensive, for the device to just fail." But there are actually more costs that could happen.

(00:33:19):

Christopher is looking at me like I am not making sense.

CW (00:33:22):

No, I am thinking through. Right. That is your threat model like, "What can somebody do with this?" They could turn it into a botnet, or they could cost you a lot of money, or perhaps do some sort of ransom thing. Like, "We are going to-"

EW (00:33:39):

"We are going to hold your data-"

CW (00:33:40):

"We are going to transmit on the satellite phone, until you pay us a million bitcoin," or something. I do not know, but yeah.

EW (00:33:49):

While I want to have a systems perspective- Sometimes I definitely am participating in the whole software, from the web, through the cloud, down to the Raspberry Pi, and to the microcontroller.

(00:34:03):

As the person who manufactured the robots, and focused so much on the microcontroller, if I only do that and do not consider the possibility of a botnet, because my microcontrollers- Good luck. But a Raspberry Pi is actually standard enough, that people could make it a botnet.

(00:34:25):

Okay, so we have talked about threat modeling. I actually want to go back a little bit. Because we talked to Philip Koopman recently, about embedded AI safety. He drew some really interesting parallels between safety and security, to me. Security and privacy are two things that are always talked about together.

(00:34:45):

But safety and security, I thought he was going to talk about how if it is not secure in a car, someone may come up to you and make your car go 80 miles an hour, and it will be scary and all that. But it was not that.

(00:35:01):

Let us see. Phil said, "Both safety and security deal with analogous concepts. They both involve identifying the way something can go wrong. For safety, this is a hazard analysis. For security, this is a vulnerability assessment. Assessing the risks presented by something possibly going wrong, risk analysis versus threat modeling. And implementing mitigations to reduce the risk to a desired level."

(00:35:33):

That parallel actually resonated more with me, because I think more about security. How do you think about it, Nick?

NK (00:35:39):

I think there is a lot of crossover. With security, it is not always a bad actor that could be causing something to happen. It could be a misconfiguration of something, that provides a level of access that is not supposed to be there.

(00:35:57):

For example, I think last year there was an issue with the Ubiquiti UniFi camera ecosystem, where for a short period of time, something got misconfigured and people could see other people's camera feeds. That was definitely a security issue. But it was not some bad actor hacking in and making everybody see each other's stuff. It was just a, "This was configured poorly, or misconfigured."

EW (00:36:27):

For everybody? How do you misconfigure everybody's?

NK (00:36:31):

If you misconfigure the platform itself-

EW (00:36:34):

Yeah. Okay, okay.

NK (00:36:36):

That provides access. Yeah.

EW (00:36:38):

"The cloud. Where you can make one mistake happen to everybody."

CW (00:36:42):

"Make one mistake into a million mistakes."

NK (00:36:44):

"It is all about scaling. We could scale mistakes like no one else."

(00:36:48):

Yeah, there is a lot of crossover. There are security issues, that also affect safety. There are safety issues, that also affect security. There are privacy issues, that affect safety. There are privacy issues, that affect security, and vice versa. I do throw privacy in there as well, because there is, yet again, a lot of crossover.

EW (00:37:10):

So security and safety usually have different people sitting at the table, because they have different processes. Hazard analysis versus vulnerability assessment. Privacy though, usually falls in with security. Or is there a separate set for that? A separate process for that?

NK (00:37:30):

Privacy will often fall under the purview of legal teams.

EW (00:37:36):

Ah, yes. Because that is what you want at your engineering meeting.

NK (00:37:41):

Yeah. My own goal when it comes to looking at privacy things, is I do not want to make Eva Galperin yell at me on Twitter, for something that I did. She is a privacy researcher with the Electronic Frontier Foundation.

(00:37:58):

So yeah, it is like the usual security issue thing, is I do not want to end up on the front page of the New York Times. My bar for knowing that I screwed up, is Eva yelling at me on social media.

EW (00:38:16):

It is weird where our pressure points are.

NK (00:38:19):

<laugh> Because I have seen the things that she has ranted about. I know if I have done something that she has ranted about, I have done something terribly, terribly wrong. I will feel bad about myself, and I will go and move into the woods in a cabin with no electricity.

CW (00:38:35):

<laugh>

EW (00:38:40):

What she rants about- What she ranted about five years ago- Security changes over time. The things that I did in 2015, are things that I could not do now. Because the threats are so much bigger, so much easier for other people to run.

CW (00:39:01):

Well, and the way we do things has changed.

EW (00:39:03):

Yes.

CW (00:39:04):

It used to be- It used to be- <laugh>

EW (00:39:05):

In my day.

CW (00:39:08):

Time was, if a company wanted to have a server presence on the internet, they had a server- Sometimes on their own campus! That the internet connected to, and that perhaps their devices-

EW (00:39:23):

With their own allocated IP address.

CW (00:39:24):

With their own IP address range or whatever, and that they managed. It was there and they were in charge of it. Or, beyond that, they would put their servers in various data centers. But they were their servers. The cloud, such as it was, was the company's hardware and stuff, that they managed.

(00:39:39):

Now, everything is outsourced to gigantic hyperscaler things that run everything, which is a completely different way to think about things.

NK (00:39:48):

But it is still just somebody else's computer.

CW (00:39:50):

It is somebody else's computer. Yes, yes.

EW (00:39:52):

But it is a monoculture now.

CW (00:39:55):

Or a tri-culture. Yeah.

EW (00:39:58):

It makes it easier to attack multiple things.

CW (00:40:01):

Does it? I do not know that it does.

EW (00:40:03):

Well, it assumes that the cloud providers are keeping up security.

CW (00:40:07):

Yeah. Yeah.

NK (00:40:09):

It depends on, I guess, the platform and how it is configured. If you are making use of, I do not know, some cloud storage service. Is the vulnerability going to be in the cloud storage service itself? Or is it going to be how you have configured your cloud storage?

(00:40:31):

There are, I guess different- I would not say there is more or less necessarily. It is just very different. I think that difference, and users not necessarily being accustomed to certain ways of doing things, presents some issues.

(00:40:53):

But it also, I think, falls upon the providers to give users secure defaults. If that means a more restrictive environment by default, where somebody has to opt into a less secure configuration, I think that makes sense too.

(00:41:14):

So there is blame to go around. There are vulnerabilities to go around. There are configuration issues to go around. But it is not necessarily more or less than it has been.
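
The secure-defaults point can be made concrete with a toy checker. This is a sketch only; the configuration shape is hypothetical, loosely modeled on cloud storage public-access flags, not any vendor's real API:

```python
def is_publicly_readable(cfg: dict) -> bool:
    """Toy audit: treat anything not explicitly locked down as public.

    Failing closed is the point: if the platform never set the flags,
    the checker reports the bucket as exposed rather than assuming it
    is safe.
    """
    block = cfg.get("public_access_block", {})
    return not (block.get("block_public_acls", False)
                and block.get("restrict_public_buckets", False))


# An empty config is reported as public; an explicit lockdown is not.
print(is_publicly_readable({}))  # True
```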

(00:41:28):

Certainly we found more ways of attacking things on a fundamental level. Look at the Spectre and Meltdown speculative execution issues. Rowhammer, where you can actually detect and influence the contents of memory. All of these weird things, that are based on decisions we made in computing 20, 30 years ago.

EW (00:42:01):

I have heard of those words you used. I even remember trying to understand the methodology and implications. But I have not heard about them lately. Is it because they are so hard to implement? Or because there have been patches? Or because everybody is like me, and just sticks their head in the sand occasionally?

CW (00:42:23):

<laugh>

NK (00:42:26):

There are patches. There are new similar types of exploits being discovered. The attack difficulty is usually greater than just hitting an endpoint on the internet. You usually have to have access to the machine, sometimes as a VM, sometimes as bare metal. But yeah, they are a lot more, I would say, academic in nature. But they are definitely out there.

EW (00:42:56):

Part of threat assessment or threat modeling, is figuring out how valuable this is to someone. Someone attacking my satellite phone connected IoT widget, did not- There are only a few in the world, and they cannot really do much. It is a data collection system. So the threat, while I would be sad, is not...

CW (00:43:26):

Why is that interesting to somebody?

EW (00:43:29):

Yes.

CW (00:43:29):

As a target. Yeah.

NK (00:43:34):

There is a paper that I like to cite, written by a security researcher named James Mickens, who is absolutely hilarious. The paper is called "This World of Ours." It is about threat modeling. What he boils down to is, your adversary is either Mossad or not-Mossad.

CW (00:43:55):

<laugh>

EW (00:43:55):

<laugh>

NK (00:43:55):

Where your adversary is either a spurned ex-partner trying to log into your Gmail to see what you are writing, or a nation state actor.

(00:44:12):

I add a third one into that mix, which is a security researcher who is bored on a weekend, and has nothing else to do and starts poking at things. Because they are going to be a lot more capable, than a random person just trying to get into your Gmail. But they are not going to have the resources of a nation state actor.

(00:44:39):

That is where I have run into interesting things talking to vendors that say, "Oh, hey, there is no way somebody could get this key off of our device file system."

(00:44:48):

And I tell them, "Look, a friend brought a router over to my house the other day. We desoldered the flash, threw it in a reader, and pulled all the contents off in an hour. Because we did not have anything else to do. This is with less than a hundred dollars worth of tools. Your average person is not going to do that. But there are people out there that will happily do that, and happily poke at your system."
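
For a sense of how little effort that "bored researcher" pass takes: once the flash contents are in a file, finding key material is a few lines. A minimal sketch; the marker list is illustrative, not exhaustive:

```python
import re

# PEM markers that commonly show up in device file systems.
KEY_MARKERS = [
    b"-----BEGIN RSA PRIVATE KEY-----",
    b"-----BEGIN EC PRIVATE KEY-----",
    b"-----BEGIN OPENSSH PRIVATE KEY-----",
]

def find_key_material(image: bytes):
    """Return (offset, marker) pairs for key markers found in a raw dump."""
    hits = []
    for marker in KEY_MARKERS:
        for match in re.finditer(re.escape(marker), image):
            hits.append((match.start(), marker.decode()))
    return sorted(hits)

def printable_strings(image: bytes, min_len: int = 8):
    """Yield (offset, text) for ASCII runs, like the `strings` utility."""
    for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, image):
        yield match.start(), match.group().decode()
```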

EW (00:45:16):

It is those people who can then sell that vulnerability to your competitors, to other people who may not be state actors, but also may have larger pockets.

NK (00:45:31):

Yep. They can sell it. They can exploit it just for the heck of it, to see what they can do. They could responsibly disclose it. There are any number of things that can happen. But the reality is they are out there, and they are poking at things.

EW (00:45:50):

It is not just you. I know of some students many years ago, who broke into other people's servers for the fun of it. Because it is a challenge, and it is just sitting there, and I am bored. Yes, the, "I am bored and I want to try out these skills. Can I even?" There is an educated attacker who does not have a lot of resources, who should be part of the threat modeling. I agree with you.

(00:46:20):

And then there should be- I do not know if script kiddies fall into that. But there are people who are learning about security and want to just practice. Maybe they are not after anything. Although once they get in, it is kind of fun. And now it is like, "Okay. Well, what can I do with this?"

NK (00:46:38):

Yep. "What other systems can I see from here? What data can I access?"

EW (00:46:44):

There is a feeling of righteousness like, "These people should have done better. So therefore it is okay if I post it to the internet." Security is hard. How do I make it easier, Nick? <sigh>

CW (00:46:53):

Do not make something that you have to worry about. <laugh>

NK (00:46:58):

Yes. Reduce your attack surface, as much as possible.

CW (00:47:02):

That is kind of a question I have, though. When we were talking about the evolution of stuff moving to the cloud, I was flippantly going to ask, "Should we be considering moving back to the other model, where people have more control over their internet presence?" But it does not sound like that is actually a solution.

EW (00:47:18):

Read-only memory in small devices that are programmed in the factory.

CW (00:47:24):

Once.

EW (00:47:24):

Once.

CW (00:47:24):

I do not know. I do think that there is a tendency to just add in internet features excessively, those kinds of things, firmware update, all that stuff, just by default now. Because that is the way things are done. Without considering, for this product or this thing, is that a necessity? Or could we do something simpler? Take a little bit of extra time?

EW (00:47:48):

Necessity. Is it worth the associated risks?

CW (00:47:51):

Right, right. Right. Right. Yeah. I am not sure that was a question. <laugh> But feel free to comment on it.

EW (00:47:57):

<laugh>

NK (00:48:01):

Yeah. Adding internet connectivity and app connectivity to all the things, is definitely a pattern we are seeing a lot of. As someone whose smart devices at home are all on an isolated network, where they cannot talk to anything else or are only talking to local resources, it annoys me greatly.
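
The isolated-network setup Nick describes can be as small as a deny-by-default forwarding policy. A hypothetical nftables fragment (interface names invented for illustration): the IoT segment gets replies to traffic the LAN initiated, and nothing else.

```
table inet iot_guard {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # The trusted LAN may initiate connections to the IoT segment.
        iifname "lan0" oifname "iot0" accept

        # IoT devices may only answer; they never initiate toward the LAN.
        iifname "iot0" oifname "lan0" ct state established,related accept
    }
}
```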

(00:48:29):

What was it? The technology enthusiast says, "Yes, everything in my house is smart and the latest stuff." And the technology worker says, "The only thing smart in my house is a ten-year-old printer. And I keep a gun next to it, in case it makes a noise I do not recognize."

EW (00:48:46):

<laugh> I mean either, I.. Yes. <sigh>

NK (00:48:53):

I just updated the firmware on my printer. You know how it did its firmware update?

CW (00:49:01):

I am going to say, "USB stick."

NK (00:49:03):

It was a print job.

CW (00:49:03):

What? Wow!

EW (00:49:07):

So you are telling me that a print job can be used to- Wow.

NK (00:49:11):

It had a series of PCL commands, and then a huge binary blob, with some signatures at the end. That it sent to my print spooler, which then sent on to my printer, as if it was printing something.
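
A firmware-update print job of the kind Nick describes has roughly this shape. The sketch below is hypothetical, patterned on PJL-style upgrade jobs (the `\x1b%-12345X` Universal Exit Language sequence is a real PJL job delimiter; the exact commands vary by vendor):

```python
ESC = b"\x1b"
UEL = ESC + b"%-12345X"  # PJL Universal Exit Language: marks job boundaries

def build_update_job(firmware: bytes) -> bytes:
    """Wrap a signed firmware blob so it travels the print path as a job."""
    # PJL commands announce the upgrade and its size; the spooler just
    # forwards the whole thing to the printer like any other document.
    header = UEL + b"@PJL UPGRADE SIZE=%d\r\n" % len(firmware)
    return header + firmware + UEL
```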

CW (00:49:23):

I cannot think of any problems with that.

EW (00:49:24):

On the other hand, it is really cool.

CW (00:49:27):

That is the sort of solution they came up with, when they were at the end of- <laugh> "Oh my God, I forgot to do the firmware update, and this thing- How are we going to do it?"

EW (00:49:36):

"Are we going to have it be an access point?" "No, because users cannot understand that. They are just too silly."

CW (00:49:41):

"Maybe we will have a special toner cartridge?" "No." <laugh>

NK (00:49:45):

Yeah, I figured it would have had some sort of other- It is a network printer, so I figured, "Okay. It would probably have a webpage where you go and you say, 'Upload firmware here.'" Or it has got a USB port, so you can stick in a USB stick. Nope.

EW (00:50:00):

Well, those were what the engineers argued for. In the end, somebody said, "We need to make this simpler, so that mom and pop can update their printers without having a hassle. So what is the way to make this the simplest for the user?"

NK (00:50:15):

Yep. "And what do I do for the non-network printers, that do not have a USB port?" So going with the lowest common denominator of product, and just using that same technique across all of them, so that you do not have to maintain a bunch of different types of deployment mechanisms.

EW (00:50:31):

But is that so bad, if they had signatures and encryption, and you never wanted to actually print that document?

NK (00:50:40):

With it properly signed? No, I do not think it is bad necessarily. It is just amusing.

EW (00:50:44):

Yeah.

NK (00:50:44):

But if I were to go and start digging into the printer firmware and see, "All right. How does it do its signature validation? What are the signatures? Is it just doing like an MD5 based signature algorithm on this? If so, that is bad."
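
The distinction Nick is drawing: a bare digest proves only that the blob was not corrupted, while a keyed check proves who produced it. A sketch using an HMAC as a stand-in; real firmware should use an asymmetric signature (for example Ed25519) so the device only ever holds a public key:

```python
import hashlib
import hmac

def md5_check(blob: bytes, expected_hexdigest: str) -> bool:
    # Integrity only: an attacker who can modify the blob can just as
    # easily recompute this digest, so it proves nothing about origin.
    return hashlib.md5(blob).hexdigest() == expected_hexdigest

def authenticated_check(blob: bytes, tag: bytes, key: bytes) -> bool:
    # Authenticity: forging the tag requires the secret key.
    expected = hmac.new(key, blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```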

EW (00:51:03):

It is just a checksum.


CW (00:51:04):

It reminds me of an IoT platform a while ago. This was back when it was difficult to get credentials for your wireless or whatever onto an IoT device. I think, did they not have something where they had an app, and your phone flashed its screen? And it had a photo detector, and it kind of Morse-coded the thing. Yeah.

EW (00:51:24):

They were on the show.

NK (00:51:25):

There was an old, old smartwatch that did that. You would hold it to your screen, and your screen would flash to send your calendar info to your watch.

(00:51:34):

My refrigerator. The way it communicates with an app, is it has a little piezo speaker.

CW (00:51:42):

<laugh> No!

NK (00:51:43):

It sings tones, and my phone listens and understands what it is saying.
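
Data-over-sound schemes like the refrigerator's generally map bits to tones and let the phone's microphone demodulate them. A naive FSK sketch; the sample rate, baud rate, and frequencies are invented for illustration:

```python
import math

SAMPLE_RATE = 8000               # samples per second (hypothetical)
BAUD = 50                        # bits per second (hypothetical)
FREQS = {0: 1200.0, 1: 2200.0}   # one tone per bit value (hypothetical)

def encode_bits(bits):
    """Render a sequence of 0/1 bits as raw audio samples, one tone per bit."""
    samples_per_bit = SAMPLE_RATE // BAUD
    samples = []
    for bit in bits:
        freq = FREQS[bit]
        for n in range(samples_per_bit):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples
```

Decoding on the phone side is the mirror image: chop the microphone stream into bit-length windows and pick whichever tone has more energy in each one.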

CW (00:51:53):

So we should just go back to modems, is what you are telling me.

NK (00:51:57):

Yep. What we have done, always has been.

CW (00:52:00):

<laugh>

EW (00:52:01):

I always like it when people are like, "Why are you still using your serial port?" I am like, "A serial port will never die." Did not really believe that about modems, but it is true.

CW (00:52:10):

Hey, musical instruments are still using MIDI, which was defined in the early eighties, and has not changed much.

EW (00:52:15):

Going back to Koopman. One of the things he said was that safety relevant features are often about what engineers consider implausible. "A nut that is rated for this weight is never going to fail, given I am only putting one tenth of that weight on here."

(00:52:40):

But security-relevant failures happen when attackers successfully violate the plausibility assumptions. So in safety you are like, "Okay, trying to make sure that everything is plausibly safe."

(00:52:56):

But with security, because an attacker may cause a failure, may cause a failure that was specifically implausible given the normal run of things- I do not know what my question was. Nick, could you answer it?

NK (00:53:17):

Again, I think there is crossover. I think with the safety aspect, like looking at a bolt where it is implausible that it will fail. It is a bit macabre, but if you look at a lot of NTSB reports, the National Transportation Safety Board, there are a lot of equipment failures involved where-

(00:53:41):

In any of these accidents with aircraft, for example, where it is not a controlled flight into terrain or another type of accident, you have, "This part experienced stress over time and broke."

(00:54:00):

It was designed to handle the loads. But for some reason there was greater than expected load applied to it, and it stressed and sheared, or something else happened.

(00:54:12):

The same thing can be said with security. It is like, "All right. We have a device that has no network listeners, so it is implausible that anybody could access this device remotely." "Well, what if somebody is on the network, and they intercept the traffic, and make themselves look like the cloud endpoint that the device is talking to?"

(00:54:37):

If there is something in the way the device communicates, where it is not doing that proper certificate check or things like that, now you are the thing that the device is talking to and you can tell it to do stuff, as the cloud service would have. That is something that is far outside the normal operating parameters of the device. But it is something that can happen.
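
The "proper certificate check" Nick mentions is exactly what stock TLS libraries give you when you do not turn it off. A Python sketch of a client that would reject the impersonating endpoint:

```python
import socket
import ssl

def open_verified(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS connection that validates the server's certificate."""
    # create_default_context() enables chain validation and hostname
    # checking; a device that disables either can be impersonated by
    # anyone who can sit on the network path.
    ctx = ssl.create_default_context()
    assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
    raw = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=host)
```

The dangerous anti-pattern is setting `ctx.check_hostname = False` and `ctx.verify_mode = ssl.CERT_NONE` to silence certificate errors, which is what makes the interception scenario above work.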

EW (00:55:07):

It is the orchestration, the malicious orchestration of intended failure. It is people causing the failures.

NK (00:55:19):

Yeah. That is where you have to- As I tell people, you need to think like a jerk.

EW (00:55:25):

Yeah. So, I can come up with a decent safety plan. But then if I really come up with a security plan, I cannot really make anything <laugh>. Sorry. I do not want to feel hopeless around security, because it has gotten so much better.

CW (00:55:42):

I think at the root of it, you do not want it to be your job. Right? You want to make the thing, that does the thing.

EW (00:55:49):

I prefer to make the thing. Yes.

CW (00:55:50):

It is like, "Okay. I have to make the thing. But I also have to do all of this homework."

EW (00:55:55):

I do not mind doing the homework, if I know that there is an answer. But security moves so fast, that whatever answer I find...

CW (00:56:04):

I was not accusing you of anything. I am headed toward, should- I have not worked in a company yet, where there was anybody dedicated to thinking about security issues on the firmware side. It was always a, "Everyone should be thinking about this."

EW (00:56:20):

But also, "Do not bill any time towards it." And, "We are not going to add-"

CW (00:56:24):

Well, not even that, necessarily. But it was always an after- It has always been an afterthought.

EW (00:56:26):

It is always an afterthought.

CW (00:56:27):

Without- Have we reached the time, the moment where, yes, firmware teams need security people? That is their job.

EW (00:56:36):

Do you need a tools person before a security person?

NK (00:56:39):

Tools, as in?

EW (00:56:40):

Someone to manage a unit test server and improve...

CW (00:56:47):

I think those are different.

EW (00:56:48):

They are different skill sets.

CW (00:56:49):

Yeah.

EW (00:56:50):

But given a random embedded team, they are both needed. As you scale up, usually one of the things you get, is somebody who is more focused on the tools than on the application. And you get somebody who maybe is more focused on the security than the application.

(00:57:10):

Or, for AI systems, you end up with the application, and then you end up with the embedded software engineer who is more responsible for the inference. I guess my question is, do I need a security person first? Or do I need a tools person first? Given my mythical, but IoT connected, gadget.

NK (00:57:32):

I do not know what the answer is. But I imagine there is some inflection point, at which it makes more sense to bring in dedicated resources for your CI/CD pipeline, your unit test pipeline, your security architecture reviews. Versus relying on your firmware engineers to just be multidisciplinary in nature.

(00:57:59):

That inflection point probably happens a lot sooner, than companies are willing to put money into that.

EW (00:58:09):

Oh, much sooner. It is often shared between teams, because it is not a full-time job usually. It is, once you get CI/CD set up, now you can bring it to other teams. Single teams think they do not need it, but that is not always true.

NK (00:58:27):

Yeah. With security, same thing. If you have one security person dedicated to a team of four firmware engineers, there is probably going to be a lot that they are not doing, a lot of the time.

(00:58:39):

Whereas with a larger organization, you can have an application security team, that takes in requests from a lot of different development teams. Reviews their software, helps them with threat modeling, that sort of thing.

(00:58:55):

Probably something that can be done to help with this, is looking at existing companies and services, that provide some of the IoT connectivity portion. Like Golioth. Like AWS with Greengrass and IoT. Various other vendors.

(00:59:20):

Relying on a company that does just this connectivity portion, and has a set of really good documentation on, "This is how you implement this properly. These are the libraries you use. These are the application components we provide." Then you rely on them for, if they need to send an update, you say, "Okay, I can patch that into my firmware."

(00:59:50):

And then you do not have to think as much about the network connectivity and network exposure side of things.

(00:59:59):

But again, you need to be able to make sure you have things set up properly, and know what questions to ask the company, to ensure that they are not just selling snake oil. You need to know security stuff, or have a security person to lean on for that.

EW (01:00:20):

I bet there are consultants that would help you. Not me, but- Which reminds me, Nick, are you looking for a job?

NK (01:00:28):

Yes, I am looking for a job, currently. I have done work in network and infrastructure security, both on prem and cloud. And have a great interest in embedded system security, or the lack thereof, as we have currently been discussing.

EW (01:00:47):

You have mostly worked for larger companies lately. Do you think larger is better for this sort of thing, because it gives you more time to focus on the bigger picture? Or are you interested in something more tactical, getting things shipped, but also securely?

NK (01:01:08):

I have worked for both very large and very small companies. It is definitely different, but I kind of enjoy each of the challenges uniquely. So it is fun to be in a small place, where you really have to maximize resource utilization.

(01:01:29):

Then, a giant company, you get to talk to a whole bunch of different teams, doing a whole bunch of different stuff. But there is always a spot for interesting things to be looked at. Which is to say, I have no preference of a giant or a tiny company.

CW (01:01:49):

I wanted to ask, we have talked about a lot of things, but most of us as former developers come- Well some of us come from a background of not eating and breathing security for our careers.

(01:02:00):

What are some good ways to come up to speed? Things that you would recommend as ways to learn. But also, how to keep up with stuff? Because we are always keeping up with our own technology changes. But security is something that we probably should be at least aware of, too.

NK (01:02:19):

Yep. I am going to take a sec to rant about some of the cybersecurity degree programs that I have seen. For anybody in such a program, please go out and learn about the fundamentals of systems and software and computing operation.

(01:02:40):

I cannot count the number of graduates from such programs that I have interviewed, that do not know how a network works at a basic level. That is really unfortunate, because for a lot of security things, you really do need to understand the basics of what you are looking at. It is not just about running Kali Linux, and running Nmap scans from that. It is about understanding what those results are telling you, and why.

(01:03:12):

That said, good ways to learn? There is- What is it? MIT has a document called "The Missing Semester of Your CS Education."

CW (01:03:27):

Oh.

NK (01:03:29):

It has a bit about networking. It has a bit about security. It has a bit about running version control systems.

CW (01:03:40):

That sounds like a good thing to disseminate.

NK (01:03:42):

Yeah, it is a good thing to provide a foundation on. Just a little bit of like, "All right, how do you interact with a computer? How do computers interact with each other?"

(01:03:54):

There are a lot of capture the flag exercises, where you are presented with a system and a challenge. Like, "Here you have this EC2 instance running, with a web server that does this. Try and get the root password from the system. Or try and get it to execute something that it should not." And then you try and perform various attacks.

(01:04:24):

The good ones will provide some kind of hints and guidance along the way. There is one going on right now, that is the SANS Holiday Hack Challenge. That is an exceptional program, that has a bunch of talk tracks in it. Those are related to a lot of the challenges that are being presented.

(01:04:50):

Each of the challenges basically builds upon the prior challenges. So as you go through it, you get more and more advanced techniques and things that you are doing. It is a very approachable system.

(01:05:05):

There is even one dedicated to embedded systems, called "Microcorruption". Where the challenge is, "Oh hey. You need to craft a Bluetooth packet to make this lock open." Which obviously being in a web browser, it is all emulated, so you are not actually like- They do not ship you a lock to sit on your desk.

(01:05:27):

But it is the same sort of techniques, where you would pull the firmware off a device, which they then give to you. You step through it with a debugger. You figure out, "All right, what sort of inputs provide what output?"

(01:05:39):

I recommend checking out CTFs. Those are a lot of fun. They can also be cool to see, "All right, what is an attacker going to do, if they are wanting to get at my device?"

CW (01:05:50):

Yes.

EW (01:05:52):

It is good practice. Cool.

(01:05:55):

Nick, do you have any thoughts you would like to leave us with?

NK (01:05:59):

Yeah, I am going to jump away from computer security for a minute, and go into a bit of more personal security and safety. You all, it is getting weird out there.

CW (01:06:13):

<laugh>

NK (01:06:13):

Get to know your neighbors.

CW (01:06:18):

That is always a good idea.

NK (01:06:20):

Yeah. Get out there and see if you can find a "Stop the Bleed" class. Those are really cool. Accidents can happen anywhere: in the kitchen, the garage, wherever. Bleeding emergencies can go very bad very fast. So learn how to apply a tourniquet. Keep a little emergency kit around somewhere. Just, yeah, be careful, keep safe. As my friend Mark would say, "Be good humans."

EW (01:06:50):

Does this go back- I know you had EMT training at one point. Years ago?

NK (01:06:55):

Yes.

EW (01:06:55):

Do you keep that up?

NK (01:06:59):

I had to let it lapse when I moved to Washington. They did not have reciprocity with California, and I did not have a national cert at the time. Washington required you to have affiliation with an emergency service provider.

EW (01:07:13):

Oh. Blargh.

NK (01:07:16):

So, I was not able to keep that up. But I do try to keep up with some general basic first aid and trauma treatment stuff.

(01:07:23):

Interestingly, when I was an EMT, the guidance for tourniquets was, "Never do this. You will make the person's limb fall off." And now the guidance, based on a lot of battlefield medicine, has ten or 15 years later come into emergency services and general use: "Throw a tourniquet on it, if you cannot get it to stop bleeding. TQ and get them to the hospital."

EW (01:07:54):

Medicine's ability to undo the damage done by a tourniquet has improved amazingly. And also we figured out that bleeding out- If that is the only other option. We have a Community Emergency Response Team, that-

NK (01:08:14):

Oh, CERT training is great.

EW (01:08:15):

Yeah. You learn the basics of what to do in your community if there is an emergency. It is an eight, ten, or 12 week course, that is one night a week. It was really cool.

(01:08:28):

Knowing where the Emergency Response Teams gather. Knowing what they will take from innocent bystanders. Like, do not bring your lasagna. They cannot eat it. If you have packaged granola, they can eat that.

(01:08:46):

And of course then, all of the first aid. It is just really useful. Also, I agree with Nick, know your neighbors. You do not have to love them. You do not have to have dinner with them all the time. But opening your yard and having an "anyone who wants to can come by and just say hello" gathering is a good idea. It is really helpful.

NK (01:09:12):

Power outage, go knock on a few doors, check on them. If you know you have an elderly or compromised neighbor, if it is a heat wave or a cold snap, go say, "Hi." See if they are okay.

EW (01:09:25):

You do not have to be a bother. You can just say, "I was thinking of you."

NK (01:09:29):

Yep.

EW (01:09:31):

Our guest has been Nick Kartsioukas, Senior Security Engineer. You can find him as "Exploding Lemur" on socials like Bluesky and Mastodon. And of course, check out his LinkedIn profile, if you are looking for a senior security engineer.

CW (01:09:48):

Thanks Nick.

EW (01:09:50):

Thank you to Christopher for producing and hosting. Thank you to RunSafe for sponsoring this show. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(01:10:04):

And now a quote to leave you with. This one is from Malala. "If we want to achieve our goal, then let us empower ourselves with the weapon of knowledge, and let us shield ourselves with unity and togetherness."