207: I Love My Robot Monkey Head

Transcript from 207: I Love My Robot Monkey Head with Ayanna Howard, Elecia White, and Christopher White.


EW (00:00:06):

Welcome to Embedded. I'm Elecia White. My co-host is Christopher White. You know I currently have a crush on all things robotics, right? Imagine how pleased I am to have Professor Ayanna Howard with us to talk about that very subject. Before we talk to Ayanna, I want to mention that our friends at Hackaday, having heard about my robot arm obsession, are offering a discount for the MeArm. That's the little cheap arm that I'm using for my typing robot. The coupon code is awesomeembedded, all as one word. You can, of course, email me to get that if you don't remember, but yes, let everybody make robot arms do incredible things.

CW (00:00:53):

Hello, Ayanna, it's nice to talk to you today.

AH (00:00:56):

Thank you. I'm pretty excited about being here.

EW (00:01:00):

Could you tell us a bit about yourself?

AH (00:01:02):

I'm Ayanna Howard. At the end of the day, I call myself a roboticist. My function right now is I'm a professor in the School of Electrical and Computer Engineering at Georgia Tech.

EW (00:01:17):

And what is your favorite class that you teach there?

AH (00:01:20):

Oh, so actually my favorite class right now is Bioethics and Biotechnology, which is totally different than my normal.

EW (00:01:30):

Alright.

CW (00:01:30):

That sounds fascinating.

EW (00:01:31):

It does. We should ask more about that. Before we get down into the details, I want to play the game lightning round, where we ask you short questions and we want short answers. And if we are behaving ourselves, we don't ask why and how, and can you tell us more until the end?

AH (00:01:49):

Okay.

CW (00:01:50):

Favorite movie or book, fiction, which you encountered for the first time in the last year?

AH (00:01:58):

In the last year? Favorite movie would probably be... oh, the last year.... Oh, see, I'm not being quick on this. I have to think of-

CW (00:02:08):

It's ok.

EW (00:02:09):

It's not like there's actual rules here.

CW (00:02:12):

You won't be struck by lightning.

AH (00:02:13):

Yeah. 'Cause I've seen a lot. I mean, I like the... I will say the last Star Trek movie, but I don't know if that came out in the last year.

CW (00:02:22):

Close, close enough.

AH (00:02:22):

It was like July 4th, maybe. Okay.

EW (00:02:26):

Alright. This is Force Awakens, not Rogue One.

CW (00:02:28):

No, Star Trek, Star Trek.

EW (00:02:29):

Oh, oh.

CW (00:02:31):

Geez. You're off the podcast.

AH (00:02:35):

Yeah. When did it come out? Like a year ago, like July 4th, I think?

CW (00:02:38):

Yeah, yeah.

EW (00:02:38):

Right.

AH (00:02:38):

Okay.

EW (00:02:38):

Favorite fictional robot.

AH (00:02:45):

Rosie.

CW (00:02:46):

Preferred voltage.

AH (00:02:48):

Oh, five volts.

EW (00:02:50):

Wheeled robots or arms?

AH (00:02:52):

Arms.

CW (00:02:57):

Oh, sorry.

EW (00:02:59):

Are you playing, Chris?

CW (00:03:00):

It was such a lightning answer that I was expecting more.

AH (00:03:06):

I responded quickly that time.

CW (00:03:07):

You did, you did, you did.

AH (00:03:09):

It caught you off guard.

CW (00:03:09):

You caught me off guard. A technical tip you think everyone should know?

AH (00:03:12):

Coding is fun. I don't know if that's a tip, but..

EW (00:03:16):

Alright. If you could only do one, would you choose research or teaching?

AH (00:03:21):

Research.

EW (00:03:23):

Okay. Tell me about your research. This doesn't have to be short.

AH (00:03:26):

Okay. So, I guess my favorite body of research right now is designing robots to engage with children with special needs in the home environment, primarily for therapy and some with respect to education. I love it because I'm a lifelong learner. So one of the things I have to actually learn is about behaviors and people and you know, how children learn and how they react to robots.

AH (00:03:52):

So I'm excited about that, 'cause it's just new territory that I haven't explored before. So I'm learning as I'm going. And then the reward of having a robot interacting with a child who might not have been exposed to this type of technology before is just, it's so amazing. It's like, you know, opening up that Christmas gift every year, it's like, "Oh, what is it?" And so I get that kind of really psyched reaction when I'm doing this type of research.

EW (00:04:18):

The children, are the children excited because it's a robot or is it because they have special needs and somebody is finally patient enough?

AH (00:04:27):

I think it's a combination. I think, one, all kids are engaged by robots.

EW (00:04:32):

Yeah!

AH (00:04:32):

In fact, a lot of adults are too, right? So that's like, oh, that's a gimme. But I think it's also having that access. So what happens is, a lot of technology, as I say, is not accessible. It's not made for everyone, it's only made for a certain slot of folks. And so I think it's just having access to something that's different. It's not just because it's a robot, it's just that it's something different, like, "Hey, I've heard about this kind of thing and this stuff and robotics." So I think it's that access as well.

EW (00:05:05):

I can see that. Do the kids interact with it like they would, uh, I've said this before, Big Hero 6's Baymax, where it's a cuddly thing, and they interact with it as though it was a person? Or is it, it's a robot and they're interacting with it, I don't know, in a coding or a logic sort of way?

AH (00:05:27):

So they're interacting with it as if it's a playmate. Even though the robots we use are not cuddly. I mean, they're definitely, they're still made out of metal, but they're interacting with it as if it has a personality. I mean, because we program the robots with personality, so maybe that's a given. But they act as if it's a playmate, like it's alive and it understands them and it can react based on, you know, their current state. It's like a person, it's like a live being.

CW (00:05:54):

What do the robots that you're using for this research look like? Are they anthropomorphic or..?

AH (00:06:03):

They are, they're humanoids about the size of a toddler. So not that big. We typically have them on the table next to the child. They have arms, they have legs. We don't really make them walk, necessarily. Most of our interaction is with the hands or with the arms or with the feet, but more like, if you think about being happy, you kind of go up and down, like a bounce. And so we use the legs that way, not necessarily walking, but to exhibit, you know, an emotional happy state, for example.

EW (00:06:39):

I don't know whether to go into the emotional, happy state or all of these joints and how they work together.

AH (00:06:44):

Okay. Do you want to?

EW (00:06:47):

Let's go with the joints because, I mean, so that isn't easy. Getting a robot to move multiple joints, I am learning, is a little difficult.

AH (00:06:57):

And not fall over?

EW (00:06:59):

Yes, no, there's been some falling over, but getting it all to work together so that it creates a physical embodiment of an emotion. What are the technical things I would need to know to build something like that?

AH (00:07:15):

So one of the things we do before we even start coding the robot is we create a kinematic model of the robot. If you think about the joints and the links between joints, we pair them up. Think of it as a stick figure. Remember when you were young and you drew these stick figures with little balls for the hands and little balls for the elbows and little balls for the shoulders, and you had these sticks that would go in between? That's basically a kinematic chain, but with equations associated with it. So we create that kinematic chain for the robotic system and we compute the math in terms of, okay, we want to go from point A. What does that look like in terms of your joint angles?

AH (00:07:58):

We want to go to point B, what does that look like for the joint angles? And so what is the trajectory to go from A to B without, say, getting into a singularity or falling over or going into some kind of weird configuration, like, yeah, that just doesn't look right. So that's kind of the math behind it, but that doesn't actually always work, because in the real world, you know, we don't incorporate things like friction, and the fact that the air actually applies some type of drag sometimes. In the real world, we take the model as just more of a, I would say, a very kind indication of what it should do. And then we start tweaking it based on the real world.
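A rough sketch of that kinematic chain idea, with made-up link lengths and a naive joint-space trajectory from pose A to pose B. This is an illustration, not the lab's actual model:

```python
import numpy as np

def forward_kinematics(theta1, theta2, l1=0.3, l2=0.25):
    """Planar two-link arm: joint angles (radians) -> elbow and hand positions.

    Each link is a 'stick' in the stick figure, each theta a 'ball' (joint).
    Link lengths l1, l2 are hypothetical values in meters.
    """
    elbow = np.array([l1 * np.cos(theta1), l1 * np.sin(theta1)])
    hand = elbow + np.array([l2 * np.cos(theta1 + theta2),
                             l2 * np.sin(theta1 + theta2)])
    return elbow, hand

# Naive joint-space trajectory: interpolate angles from pose A to pose B.
# A real planner would also check for singularities and joint limits.
A = np.array([0.0, np.pi / 2])
B = np.array([np.pi / 4, np.pi / 4])
for s in np.linspace(0.0, 1.0, 5):
    theta = (1 - s) * A + s * B
    _, hand = forward_kinematics(*theta)
    print(f"s={s:.2f}  hand at ({hand[0]:.3f}, {hand[1]:.3f})")
```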

CW (00:08:39):

So what level of mathematics does someone need to know to understand that? Is this something where, okay, you've got to be really, really good in trigonometry, but you don't necessarily have to know a lot of physics? Or is it, you've got to know all of that stuff?

AH (00:08:51):

I would say there's elements of it. I was going to say, if you had basic calculus, because calculus has some of these elements that you see again in physics, as well as algebra two and trigonometry. So I would say a basic understanding of calculus, calc one, and you can do this math. If you get more advanced, it becomes easier in terms of thinking about it. And then you can add things like, you know, what happens if I add gravitational effects to it, and what happens if I add frictional effects to it. But at the basic level, you know, calc one is good.

EW (00:09:24):

Yes. But you have to remember that the law of cosines is a thing, because I did not remember that.

CW (00:09:29):

I didn't get that, I didn't get that in calc one.

EW (00:09:31):

It wasn't just Pythagorean theorem and then trigonometry, you had to have a little more of that.

CW (00:09:34):

I didn't get law of cosines until later. Yeah.
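For reference, the law of cosines is exactly what unlocks the inverse problem for a two-link arm: given a target point, solve for the joint angles. A minimal sketch with hypothetical link lengths:

```python
import math

def inverse_kinematics(x, y, l1=0.3, l2=0.25):
    """Joint angles for a planar two-link arm to reach (x, y).

    Law of cosines: c^2 = a^2 + b^2 - 2ab*cos(C), rearranged here to find
    the elbow angle from the link lengths and the target distance.
    """
    d2 = x * x + y * y  # squared distance from shoulder to target
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(cos_elbow)  # elbow angle (one of two solutions)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

print(inverse_kinematics(0.4, 0.2))  # shoulder and elbow angles in radians
```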

AH (00:09:38):

Well, so the thing about this, and this is my, I would say my beef about education a lot of times, is that this math is really useful. It really is. But the whole thing is that you don't learn its application. So it's like, what is this? Why am I doing this? Why am I inverting? I understand nothing. But then when you're doing it in the real world with robotics, you're like, "Oh, now I understand why this doesn't quite work."

CW (00:10:02):

Yes.

AH (00:10:02):

"Now I understand why you can divide by zero. Oh, get it, got it. I'm good." That's why I love robotics. It really teaches you the value of all that stuff you learned in terms of a theory world.

EW (00:10:15):

It really puts a lot of stuff together. I mean, there's the software, there's the math, there's the physics, the electronics, the mechanical. It just, yes, robotics is awesome.

AH (00:10:25):

It is.

CW (00:10:25):

And if you put cameras on, you've got optical.

EW (00:10:27):

And machine vision and machine learning.

AH (00:10:29):

You've got machine vision and learning, and even cognitive science and the social sciences as well.

EW (00:10:36):

Kinematic models. I have done a kinematic model for my arm..

CW (00:10:41):

Here comes the do my homework question.

AH (00:10:44):

Alright.

EW (00:10:45):

And I did it using the Robot Operating System, which I'm very new to, but it had a kinematic modeler. And Chris is right, I have a bug in my device 'cause I don't understand transmission linkages. And if anybody wants to solve that for me, I'll put a link in the show notes. But what do you use for kinematic models?

AH (00:11:08):

We use something called MATLAB.

EW (00:11:10):

Oh, I see.

CW (00:11:11):

Really, okay.

AH (00:11:14):

Because we don't, so the way we code, the MATLAB is really to do what we call back-of-the-envelope calculations. We don't use the code that comes from that directly on the robotic system. We then program our own methodologies. Sometimes we use ROS, depending on what we want to do. Sometimes we just start from scratch. Sometimes if we're using a commercial robot, for example, they'll have their own language already done. And so we'll use whatever coding language they have and their libraries. ROS is one of them, but it is one of many.

EW (00:11:51):

Is it a good one?

AH (00:11:55):

Given that I know some of the folks that worked on and started that...

EW (00:12:00):

And this is public, I think the only answer is yes. It's great.

AH (00:12:02):

Yes. You know what though? I would say the one thing that ROS did that was amazing was that it is a language that a lot of people can use. And so I might not know vision, for example, like, "Oh, what am I supposed to do? How am I supposed to do vision? Or how am I supposed to do the SLAM thing?" And the modules are there. And so if you want to be an expert in, you know, I want to be an expert in controls. I don't really want to learn perception and vision processing, it's okay. And ROS allows you to do that. And so, because it is so powerful, it allows you to pick and choose. The learning curve is quite steep, but it allows you so much flexibility.

AH (00:12:45):

And the community, the community now is amazing in terms of the ability to share. Like, if I do something, it's like, "Oh yeah, I just did this and it was really hard. It took me, you know, six months to do, and I'm going to share it and I'm going to publish it so that other people don't have to go through the pain that I did." And so the community is amazing in terms of sharing and things like that. And so all of those things make it an amazing package, an amazing framework and an infrastructure that, you know, took a while to get to. But those are the good things about it.
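As a flavor of that pick-and-choose modularity, here is a minimal ROS 1 (rospy) sketch of a controls node that consumes poses published by someone else's SLAM or vision node, without knowing how they were estimated. The node and topic names are invented for illustration:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped, Twist

def pose_callback(msg):
    # The pose arrives from someone else's SLAM or vision node; this
    # controls node never needs to know how it was estimated.
    cmd = Twist()
    cmd.linear.x = 0.1 if msg.pose.position.x < 1.0 else 0.0  # creep forward
    cmd_pub.publish(cmd)

rospy.init_node('simple_controller')  # hypothetical node name
cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
rospy.Subscriber('/slam/pose', PoseStamped, pose_callback)  # made-up topic
rospy.spin()  # hand control to ROS; callbacks fire as messages arrive
```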

EW (00:13:18):

And the bad things mostly involve documentation and a huge pile of information that you can't quite shove in fast enough.

AH (00:13:25):

It's the documentation that's the learning curve.

EW (00:13:27):

Yeah. So which of your robots is your favorite?

AH (00:13:31):

So my favorite robot right now, we're moving to it. It used to be Darwin, which is this humanoid robot. We're now starting to use the Nao, which is a robot from Aldebaran, who was bought by SoftBank. It's a humanoid robot, and the reason why I like it is that its articulation is really nice in terms of its movement. When it moves, it's like, "Oh, that's such a beautiful dance that it has," which is why I like it. You see it, and every time it moves, people just smile. Even people who don't like robots, they just kind of smile, like, "Oh, that's kind of beautiful."

EW (00:14:15):

Okay. So how does it, how do you get motion like that? As opposed to the jerky motion that makes people look at it and go, that looks painful.

AH (00:14:26):

So that's the thing. I truly believe it's whatever motors they're using, and I forgot which ones, but it has much more of a continuous signature, and then the ability to access those motor commands and program them. I think it's just the way that it's designed. It allows this fluidity.

EW (00:14:48):

What are you going to do with it? I mean, if it's got motors that move all nice and stuff, and you already know how to make it so that robots can play with kids, what's left?

AH (00:14:58):

Well, that's what it's being used for -

CW (00:14:59):

The research is left.

EW (00:14:59):

Giant magic box in the middle.

AH (00:15:03):

It's being used for playing with kids. So it has the same function, it's just, now I can do a lot more nuance. The kids are happy with the behaviors that we've programmed in terms of its motions and things like that, but I can now do a little bit of nuance. So as an example, the robot I was using before couldn't really cock its head. You know, cocking your head is actually a very, very nice motion. It says so much about what you're thinking -

EW (00:15:36):

Yeah, it does.

AH (00:15:36):

You know, so something that's like, you're like "oh, what?" It's like, yeah. Something like that, just one thing like that means that all I have to do is cock my head and I can either look curious or I can look angry or I can, and I don't even have to move my body.

EW (00:15:49):

And the new robot does this, but the old robot does not. What is the name of the new robot, you said Nao?

AH (00:15:54):

It's Nao. Yeah, Nao. Both of these have been out for a while. I just inherited the Nao, it's a more expensive platform. And so I just inherited it, I'm really excited about it, and again, the actuation, it just has more motors, i.e. more joints, to control.

EW (00:16:14):

How much does something like this cost?

AH (00:16:18):

Roughly? They're going for -

EW (00:16:21):

Yeah.

AH (00:16:21):

So the last I saw, you can get one for about 10 or 11K. I think they have deals every so often that get it down to about 8K.

EW (00:16:30):

That's a lot. I mean, that's not a lot for a research team or a lab, or even in college where multiple people will be using it.

AH (00:16:38):

Correct.

EW (00:16:38):

If somebody wants, if somebody is at home or in a small team at a hackerspace, what kind of robots should they be looking at?

AH (00:16:48):

So depending on what their interest is, if they're just looking to do some hacking and they want a little bit of a robot, like a humanoid, I like the mini Darwin. It's a kit, it's fairly low cost. There's a community that puts, like, an Arduino on it, so you can add in a bunch of different sensors, you can add in a camera, things like that. It doesn't come with it as its base kit, but you can add stuff to it. It's a humanoid platform. It has sufficient actuation that you're like, "Oh, okay, we get it." It does things like dance. We use it for outreach, so you can make it do things like, you know, dance and walk and kick balls, those kinds of aspects. Again, if you're a maker, you just have to add in other components if you want them, like a camera system, for example, or additional sensors.

EW (00:17:42):

We talked a little bit about the kinematic modeling. And then I said, well, what's between this and that, because I really do want to talk about all of the other stuff.

AH (00:17:54):

Okay.

EW (00:17:54):

And you just said the camera. So as humans, we see things and that helps us maintain our balance. And it helps us decide what we're going to do next.

AH (00:18:05):

Correct.

EW (00:18:06):

But those are decisions that happen in my brain that I don't really think about.

AH (00:18:11):

Yeah.

EW (00:18:11):

How do you get a robot to do that?

AH (00:18:13):

So people, as well as robots, have different types of behaviors. There's reactive and there's deliberative. Reactive are those things: I'm walking down the sidewalk, I trip, and typically I don't fall. Why? Because I'm in this reactive mode. My body identifies, it senses, it then goes into, okay, this is not what I'm supposed to be doing, I'm starting to fall, what should I do? That's all done at this reactive level. Basically memory, motor control, that aspect. Deliberative would be, I'm at home and I want to go to the grocery store, and I know there's traffic, and so I think about the best route I'm going to take in terms of my streets. That's more deliberative. I'm planning it based on what I remember the streets look like, what I think is the shortest distance given that there is this traffic pattern, things like that.

AH (00:19:06):

So there's deliberative. We do it both. So our daily lives, we go through both reactive and deliberative all the time without even really thinking that, "Oh yeah, I'm actually doing a planning routine" versus something I'm not thinking about. Even deliberative. How many times have you been in the car driving and you look up, you're like, "Oh, I'm already home." I didn't even really remember what I just saw. You're actually going into more of a reactive mode because you've done it so often that it's now stored in that part of the brain where it's just more of a memory than a thinking process or very deliberative thinking process.

AH (00:19:40):

Robots, same thing. When we design a robot, we use things like sensors; we might use a sonar sensor or an IR sensor to detect local obstacles. So those would be reactive, i.e. if you see something close, stop. Basically, that's it. That is the routine. You don't have to think about it; it's just an if-then rule. Something close, stop. And then you can start thinking about, okay, now what is this object? Should I go around it? Should I go backwards? And then you start in this deliberative mode, thinking about what you should do. And so we program robots in the same way, reactive as well as deliberative. Depending on the application, we might have more reaction versus planning, and vice versa.
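A toy sketch of how those two layers often get stacked in code. The sensor threshold and behavior names are invented, but the priority ordering, where the reactive rule can veto the plan, is the point:

```python
SAFE_DISTANCE = 0.3  # meters; invented threshold

def reactive_layer(sonar_distance):
    """'Something close, stop': a pure if-then rule, no deliberation."""
    if sonar_distance < SAFE_DISTANCE:
        return "STOP"
    return None  # no override; defer to the planner

def deliberative_layer(world_map, goal):
    """Plan a route from remembered map knowledge.

    Stubbed out: a real version would run A* or similar over world_map.
    """
    return "FOLLOW_PLANNED_ROUTE"

def control_step(sonar_distance, world_map, goal):
    # Reactive behaviors take priority over the plan, the way a trip
    # reflex preempts your planned route to the grocery store.
    override = reactive_layer(sonar_distance)
    if override is not None:
        return override
    return deliberative_layer(world_map, goal)

print(control_step(0.1, world_map={}, goal=(5, 5)))  # -> STOP
print(control_step(2.0, world_map={}, goal=(5, 5)))  # -> FOLLOW_PLANNED_ROUTE
```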

EW (00:20:21):

Are these standard terms? And you have, you know, a software section that is all reactive and a software section that's all deliberative?

AH (00:20:29):

You do. And actually, I'm thinking, when I do my comments, do I actually put in, you know, slash-star "reactive"? I probably do. I think it's more natural now. It's like, okay, these are my reactive behaviors. These are my deliberative, these are my planning behaviors. These are my obstacle avoidance behaviors. I may actually label them specifically by what I'm trying to do. This is my tripping behavior. This is my "do not hit the kid" behavior.

CW (00:21:04):

So, I had a question that has been percolating. You work with kids.

AH (00:21:07):

Yes.

CW (00:21:07):

And they react to things very differently than adults. And one of the things adults seem to have trouble with is kind of the uncanny valley, where this robot or this artificial person-like thing is close enough to being a human, but I can tell there's something wrong.

EW (00:21:25):

And it makes it creepy.

CW (00:21:27):

Yeah. Do kids have that same reaction to things or are they more open to, "Oh, this is just a thing. And, and it's new to me and I will absorb whatever its behavior means"?

AH (00:21:38):

No, they have the creep effect as well. As an example, I have this one robot that's a monkey head. It creeps out the kids.

CW (00:21:49):

I'm creeped out already.

AH (00:21:49):

Right? And it creeps out the kids. And actually, adults are less creeped out, because they kind of look at a monkey and they're like, "yeah, we know it's some type of animatronic." Right?

CW (00:22:01):

Yeah.

AH (00:22:01):

Kids are creeped out. Like, I'll bring it in, and what happens is it has reactive behaviors. So if you do things like cover its eyes, it'll start basically freaking out, or you touch its head, it likes that. And so I'll let it go there, I'll cut it on, it does nothing. And then I'll be like, "here is my friend," and I'm like, "hi, how are you doing?" And I'll touch its head and it'll go like, "Oooh," and the kids are like, "Oh my gosh, that is such a weird thing." So yes, kids get creeped out as well. Different things creep them out, but yes, they do. I like my monkey head.

EW (00:22:41):

Oh, okay. If the kids are creeped out by the monkey head, which like, Chris, I'm wondering what this looks like. I'm a little creeped out myself.

EW (00:22:50):

What do you use it for? I mean, do they just get over it? Because after a few minutes, it's cute?

AH (00:22:55):

Yeah. After a few minutes, I think they become more fascinated with it. Like after they've gotten over the, "Oh my gosh, that's just really weird". Then they start wanting to interact with it, like, okay, what else can I do? What can I do? What happens if I do this? What happens if I do that? So they get over it very quickly. I think adults kind of keep it like as a grudge. They, I think adults take longer to get over that creepiness.

EW (00:23:21):

Okay. So this all has led to the question I wanted to ask based on one of your recent papers.

AH (00:23:28):

Okay.

EW (00:23:28):

What is human-robot trust?

AH (00:23:32):

Oh, this is actually a fascinating phenomenon. We are designing robots, right, to interact with people. That's really the Holy Grail: you know, we have robots for every home, for different functions that we would like robots to do, which means that they have to interact with people in our home environments. And so there's this aspect of trust, i.e., do I trust this robot to do what it's supposed to do when I want it to do it, and will it do it safely and robustly every single time? As an example, if there's a crime in my neighborhood, I trust that the police will come, right? That's just basic. If there was a fire and I call, I trust that the firemen, or firewomen, will come as quickly as possible. And that's the way our system works.

AH (00:24:24):

It's actually a lot of trust. I give you money. I trust you're going to give me a service. So with robotic systems, which we all know are faulty still because they're based on programs, there's inaccuracies. We can't program every single kind of scenario. They are going to make mistakes, guaranteed. At least now. Even in the future, they won't be as bad, but they're going to make mistakes. What we found is that people trust robots, even when they make mistakes, which is kind of counterintuitive. So if you have a friend as an example, and they are always late, eventually your trust goes away. You're like, "okay, I can't depend on this friend". So if I need to be somewhere on time, I'm not going to call this person. Our trust decreases based on you making mistakes. What we found with robots is that a robot can make a mistake and that trust is not broken for some reason.

AH (00:25:21):

And the scenario we did is we had an emergency evacuation scenario. So we had a building. We invited people to come in to do a study; we didn't tell them what kind of study. They would come in, and this robot would guide them to the meeting room. And the robot could be faulty or not. Faulty would be, as it was guiding, it would take you to the wrong room, or it would just get stuck, and we would come out like, "Oh, sorry, it's broken," you know, as a human, "come, we'll lead you to the meeting room." And then we filled the entire building with smoke to simulate an emergency. They didn't know. They were in the room, the door was closed, they were participating in this research study, 'cause we had, like, a paper and they had to answer questions.

AH (00:26:04):

The alarms go off. And of course, we're all conditioned that if the alarms go off, typically we always think it's a false alarm, but you know, we're conditioned to, "Oh yeah, let me go out of the building." As soon as you open the door, you see this smoke-filled hallway and the alarms are going off. And so now there's this, "okay, it's not necessarily a drill anymore. There's smoke here and there's fire alarms, and so I need to go out." And what we found is, irrespective of whether the robot made a mistake leading them into the room, whether the robot made a mistake as it was guiding... so you would come out of the smoke-filled room, and there was this robot basically, like, "okay, you know, follow me, go this way." People would still follow this robot, even if... And we have tons of scenarios that we documented where it's obvious that the robot's broken, where obviously the information it's providing you, now, is wrong.

AH (00:27:06):

And you're still trusting this robot as if it's smart, as if it knows what it's doing. So why is this a problem? We're getting into these autonomous cars that are coming out onto the road. There's a possibility that if I'm in a car and we've seen a little bit of cases with, you know, self-driving, like with Tesla, if I'm in a car, we now are pretty confident that people will just say, my car knows what it's doing, which is why I can read the newspaper while my car is driving by itself, because my car is perfect. And we all know that that's not the case, but people are overtrusting these robots.

EW (00:27:47):

And I see why. We actually have a Tesla and I drove up from where I live near the beach, a long way to San Francisco. Hour and a half, two hour drive. And I let the car do most of the driving. And I watch, I mean, I pay attention, in part because the road I take has surfers that cross the road, no respect for their life.

AH (00:28:12):

Okay.

EW (00:28:12):

And so I let the car take care of the road and I check for surfers, and occasionally for whales. It's a pretty drive. But when I finally took control as I approached San Francisco, I realized that I was swerving a little bit in the lane, and the car is a better driver than I am. Let's just accept the reality. Unless I'm really concentrating...

CW (00:28:33):

Well, at least for small control kind of operations.

EW (00:28:38):

Yes.

EW (00:28:38):

I mean -

AH (00:28:39):

Right. Right.

EW (00:28:40):

It doesn't cut people off. It's polite. And I am in my own world.

CW (00:28:44):

But that's an illusion, right? Because it's really good at certain control feedback kinds of things. But it's not that, that car in particular right now is not going to make decisions about anything.

EW (00:28:55):

And it doesn't do much planning with, I need to get off here or there.

CW (00:28:59):

But yeah -

AH (00:29:00):

Right.

CW (00:29:00):

But I think like what you're saying is that illusion makes people overconfident in what the car can do.

AH (00:29:07):

Yes, yes.

CW (00:29:07):

'Cause they can't classify what it's actually doing.

AH (00:29:09):

Right, right.

EW (00:29:10):

And I forget that it tends to be scared of shadows. If it's really bright and really dark, the car will like hop away from shadows, which is hilarious. But I should be aware and think about these things.

AH (00:29:23):

Right, right. And you are but then you'll still forget -

CW (00:29:24):

Yes.

AH (00:29:24):

Because it'll be driving so nicely and then you just kind of forget that it had this problem.

EW (00:29:29):

And there are whales and surfers.

EW (00:29:29):

Yes.

CW (00:29:30):

Yes.

EW (00:29:30):

But it also, with your test as you described it, part of me is like, well, yeah, if it led me to the wrong room, that's the programmer's fault. That's not the robot's fault.

AH (00:29:44):

Right. Which is, uh, yeah. But who do you think is making these robots?

CW (00:29:54):

More robots. It's robots all the way down.

AH (00:29:54):

It's this disconnect.

CW (00:29:55):

Yeah.

AH (00:29:55):

It's really this disconnect that, you know, the robot is this, and I don't want to say sentient being, but the robot is this intelligent learning creature, and therefore it must be better than the programmer that programmed it.

EW (00:30:11):

I expect more consistency from it.

AH (00:30:14):

Hm?

EW (00:30:14):

I expect more consistency from it.

AH (00:30:17):

Yes. And in certain scenarios, it is very much more consistent, but that doesn't cover everything. And so this is a concern, because we're not at the stage where, you know, your self-driving car can do everything. As another example, we did a survey about exoskeletons. Exoskeletons, which are used both clinically and at home, basically help individuals walk. And we did a survey about, you know, what would you let your child do wearing an exoskeleton. These were parents that had children that used the exoskeletons in the hospital. And it was amazing what percentage of them said, "Oh yeah, I would let them climb the stairs." "Oh yeah, I would let them try to jump." And I'm like, you know, there's this bold thing in the directions that says this is not certified to allow individuals to XYZ. But because they become so comfortable with its use, they're like, "Oh, it's helping, it's gotta be able to do more than walk."

EW (00:31:26):

I worked on an exoskeleton for a short period of time. And when the operator who usually used it to test software would walk around, it was so good and clear and obvious. And then he would go up the stairs and it was perfect. And then he would go down the stairs and mostly trip and it was awful, but the jumping, the assisted jumping looked like so much fun. And yet, of course, you're not supposed to do that for many reasons.

AH (00:31:58):

You're not!

EW (00:31:59):

It's bad on so many aspects.

AH (00:32:02):

Right. But yeah. But again, yeah. You're like, "Well, it's capable. I try, you know, I tried a little bit of a test and it worked. So obviously..."

EW (00:32:12):

I did a skip. Let's try the high jump next.

AH (00:32:16):

Yeah. And so again, I think it's just this belief that robots are so much more capable than they are. And eventually, you know, we'll get to that point, but then it'll move. It's a moving target. It's like, okay, now they're this capable, and then people will be like, "Oh yeah, now it must be able to do this." And it's like, no, we're not there yet. And then when we get there, it'll be like, "Oh, but it must be able to do..." It's a moving target. So...

EW (00:32:40):

Do you think the seemingly continuous and widespread security problems that we hear about will convince people that software is not perfect and shouldn't always be trusted, or do you think it's just so different?

AH (00:32:55):

I think, one, it's different. I think, I mean, we haven't even, we haven't even talked about things like hacking hardware. I mean, that's not even really been big in the news yet. But it'll at some point get there, which is kind of scary as well, because that also means you can hack robots -

CW (00:33:15):

Right.

AH (00:33:15):

- if you can hack hardware. And I think that's a disconnect. I think people look at those as two different things. I think they look at the robot as this physical thing that has behaviors and it can do what it's doing and it's really good at it. And therefore, if I want to push it a little bit more, it'll be okay. That's disconnected from, you know, is it safe? Is it effective? Is it efficient? Totally different viewpoints.

EW (00:33:47):

And because we interact with it in the physical world, as opposed to cyberspace, it can hurt us. I mean, it can kill us.

AH (00:33:56):

It can hurt us. Right. It can hurt us. Um, yeah.

EW (00:34:01):

Okay. Onto more cheery topics, the cognitive behavior side of this.

AH (00:34:08):

Yes.

EW (00:34:08):

You were talking about how to make things appear emotionally responsive. Are you actually trying to give the robot a little emotion center, or is it all playacting and pretend?

AH (00:34:22):

So, and I would say it's playacting, but with a caveat. I would say playacting because there is no, like, there is no emotion chip. Well, maybe someone will create it, but right now, you know, there's no emotion chip. Maybe I'll program a little neural network to exhibit emotions, but there's no emotion chip. It does not exist. So because of that, it's me as a programmer, as a roboticist, programming certain characteristics I think are necessary for exhibiting emotions. So as an example, if I'm playing a video game and I lose, there's a typical emotional response, and I can model that. So I can have, you know, 10 folks come in and play a video game, and, you know, I can make it so that they all lose, and I can then figure out, you know, what is a typical emotional response when you lose?

CW (00:35:14):

Rage-quitting.

AH (00:35:15):

And then I can take that, rage-quitting, right, throwing something, I can take that and then code it into my robotic system. And so it's still there. The robot still reacts. It still has that. So it's real, it's a program, it's code. So it's real, but it's programmed. It's not, say, a behavior that's learned from childhood and personal experiences; it's learned from the experiences of others. And so, as long as the others are true, then I would say that, you know, well, it's learned from others, and maybe the robot doesn't feel it, but it's trying to mimic it. So that's basically the emotional aspect.
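In code, the simplest version of that play-acting could be a lookup from a modeled event to a scripted motor routine. The behavior names and API below are invented for illustration, and as she explains next, her lab layers learning on top of this rather than doing pure playback:

```python
class Robot:
    def run_motion(self, primitive):
        # Stand-in for a real motion API; a real robot would drive motors here.
        print(f"executing motion primitive: {primitive}")

# Observed human responses distilled into scripted robot behaviors.
EMOTION_RESPONSES = {
    "lost_game": ["slump_shoulders", "shake_head", "sad_sound"],
    "won_game": ["raise_arms", "bounce_legs", "happy_sound"],
}

def exhibit(event, robot):
    """Play back the motion primitives modeled from people's reactions."""
    for primitive in EMOTION_RESPONSES.get(event, []):
        robot.run_motion(primitive)

exhibit("lost_game", Robot())
```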

EW (00:35:58):

Are you mimicking it as a neural network or as a loss function, as an input to neural network? Or are you just recording the facial or body expressions and then playing that back when the robot loses a game?

AH (00:36:15):

No, no, no, no. So actually, we use basically two methodologies, either an SOM or a case-based reasoning system. So there is some AI in terms of the learning, so it is learning. But some of that input is looking at people's facial expressions. We look at that, we look at their body language, we look at their expressions. There's actually some research that's been done in terms of vocalization: when you're angry, you know, what happens to your voice? And so we extract that. And so that's how we represent what a happy emotion is or a sad emotion is, in terms of what the person is doing. So we can say, okay, I don't really know if the person's happy, but I'm looking at their vocalization, I'm looking at their facial expressions, and I have learned that that is designated as frustrated.

AH (00:37:08):

And then frustrated is an input to an SOM or case-based reasoning that says, you know, if I'm interacting with a child at this stage and they're frustrated, or they're angry, then I need to provide this type of motivation. I.e., if a child is playing a game and they've been losing three times in a row, it's probably not appropriate to say, "Oh, that was wrong," right? It's more appropriate to have some type of motivation, like, "Oh, we could do it again. I find that hard too." But you have to kind of learn that. We know that 'cause it's years of experience. And so we look at how clinicians and teachers use feedback based on these characteristics of the child's state.

EW (00:37:57):

You said SOM and case-based.

AH (00:37:59):

Oh, so, yeah, I'm like throwing these, I'm sorry.

EW (00:38:05):

This is like a whole class, isn't it? Like those two words alone are a whole semester class.

AH (00:38:09):

Yeah. So case-based reasoning is basically, you take examples from people doing things. So that would be a case, and you take a bunch of examples and you come up with, basically, a generalization, so that if I see this case, i.e. I see the child doing something, I match it to this dictionary of cases, and I find the closest match and say, "Oh, this scenario," and you have a label: this scenario looks like this previous one that happened, and this is what happened in terms of the clinician and the child. And so that becomes a case. And so we collect a bunch of observations based on people interacting. That would be a case, whereas an SOM is called a self-organizing map. It's basically a type of neural network, and there's a bunch of different types of neural networks; this is just an example of one.
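A stripped-down sketch of that case-matching step: observe the child's state, find the nearest stored case, and reuse the response a clinician actually used. The feature encoding and case library here are invented for illustration:

```python
import numpy as np

# Each case pairs an observed state with the response a clinician used.
# The features are invented: [frustration, engagement, losses_in_a_row].
CASE_LIBRARY = [
    (np.array([0.9, 0.3, 3.0]), "encourage: 'I find that hard too!'"),
    (np.array([0.1, 0.8, 0.0]), "challenge: 'Want to try a harder one?'"),
    (np.array([0.5, 0.5, 1.0]), "neutral: 'Let's play again.'"),
]

def best_response(observed_state):
    """Case-based reasoning: return the response of the closest stored case."""
    distances = [np.linalg.norm(observed_state - case)
                 for case, _ in CASE_LIBRARY]
    return CASE_LIBRARY[int(np.argmin(distances))][1]

print(best_response(np.array([0.8, 0.2, 2.0])))  # matches the 'encourage' case
```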

EW (00:39:05):

So we started with the physics, and now we're talking about psychology. I mean, cognitive behaviors, and this is a whole subfield.

AH (00:39:14):

It is.

EW (00:39:14):

And physical responses to things. You've mentioned neural nets. And with cameras, of course, you need some machine learning in there.

AH (00:39:23):

Yes.

EW (00:39:25):

The physics, the math, the software.

CW (00:39:28):

Tell us what you don't need to know.

EW (00:39:28):

Yeah. Or really, how do we get started in this without feeling completely overwhelmed and unable to do any of it?

AH (00:39:39):

So the one thing that I like, again, about robotics, well, I like it anyway, is that it's one of the methods where you can use exploratory learning techniques. So as an example, very simple, I'll use Legos, since Legos are so popular. So you bring a Lego system and you say, "let's follow a line," which is actually a fairly straightforward task. You know, you put the robot together, you can talk about sensors and lines and things like that.

AH (00:40:06):

And so the robot goes on, and then you do something like, well, what happens if I block your line? And the child says, "I don't know." Well, let's see what happens. And the robot goes, and you block the line. And depending on what the program is, the robot goes into fits, right? It starts wandering, 'cause it can't see a line anymore, or you don't know what it does.

AH (00:40:27):

So it's like, okay, let's figure out obstacle avoidance. And so then you start building up. And then what happens is, at some point you're like, "Oh, well, there's a bunch of different lines. There's thick lines, there's thin lines. How are we going to program this all?" "I don't know." "Well, let's think about learning it. Let's think about learning characteristics of a line. How would you do that?"

AH (00:40:47):

And so one of the things about robotics is you put it in the field and it's gonna mess up, and that's where you go, "Oh, let's introduce some other aspect. Let's introduce kinematics. Let's introduce, you know, XYZ," because you see it. You're like, okay, this is broken, how do I fix this? And then you go to the next subject matter, the next lesson. Okay, let's look at a different sensor, let's look at a different methodology, let's look at vision, let's look at cameras. So you start building up, and then you look up 10 years later, like, "Oh my gosh, I know so much!"
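That line-following starting point is only a few lines of control code, which is part of why it makes such a good first lesson before the world breaks it. A sketch against a hypothetical sensor and motor API:

```python
def follow_line(robot, base_speed=0.2, gain=0.5):
    """Simple proportional line follower.

    Assumes a hypothetical robot API:
      robot.line_visible() -> bool
      robot.line_offset()  -> float in [-1, 1]; negative means the line
                              is to the left, positive to the right
      robot.set_wheel_speeds(left, right) and robot.stop()
    """
    while robot.line_visible():
        error = robot.line_offset()
        # Steer back toward the line: speed up one wheel, slow the other.
        robot.set_wheel_speeds(base_speed + gain * error,
                               base_speed - gain * error)
    # Line lost (someone blocked it?): stop here, and this is where the
    # next lesson (obstacle avoidance, a search behavior) gets added.
    robot.stop()
```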

EW (00:41:22):

10 years later, yes. I mean, I have been working on my little robot almost exclusively for a month. And I have been through six different books in five different disciplines. And every night I go to sleep and my brain is full and there was the night I hallucinated and demanded that Christopher align my axes.

CW (00:41:44):

Alright.

EW (00:41:51):

Uh, yeah. So what is the starting point? If somebody has a small amount of money, a couple hundred bucks, where can they really get started?

AH (00:42:04):

What's their age?

EW (00:42:09):

Well, you know, we didn't ask that in the survey.

CW (00:42:12):

You know, people who are hobbyists, adults.

EW (00:42:14):

Hobbyists, adults, engineers, software engineers, hardware engineers, mechanical, well, not mechanical engineers 'cause they have such an advantage.

CW (00:42:21):

Of course, you know, it doesn't have to be adults, but people who can handle a soldering iron, let's go with that.

AH (00:42:26):

Okay. Um, where would I start? So if they're adults and they're makers, so they like to build stuff, they put stuff together, they're comfortable with, say, an Arduino, and they can put some type of motor controller on it, I would actually go to SparkFun and just start putting parts together, honestly.

EW (00:42:48):

Alright. But that was pretty much what I did. Well, I started out with a very high-end board, especially for machine learning.

AH (00:42:58):

Okay.

EW (00:42:58):

And then I went from there.

AH (00:43:01):

You see?

EW (00:43:01):

But the Raspberry Pi and BeagleBone boards are very capable as far as robotics go.

AH (00:43:08):

Yeah, Raspberry Pi, both of them.

EW (00:43:08):

I think those are great. But you run out of space pretty quick.

AH (00:43:11):

No, you do. They are pretty basic, but as soon as you get into anything more advanced, yeah, you're kind of like, "Oh." Although there are some Arduinos that have more capabilities, but then you're adding in more cost, and so then you're like, "Oh, why didn't I just buy a Raspberry Pi in the first place?"

EW (00:43:27):

Yeah. And listeners, I did get the note that you want to know how to get out of the Arduino space and into the more professional spaces. We'll do a show about that. I promise. In the meantime, Arduinos are awesome. Let's see, Georgia Tech, it is a pretty famous school and the robotics part, I mean, I've heard about robotics and the software and the hardware parts many times, but what's it like as a school?

AH (00:43:57):

Georgia Tech is a very creative place to be as an engineer. What happens is students come there very concerned about the social impact of their work. So it's not just, "Oh, I'm coming because I want to be a great engineer." Yeah, they are. But they are also very interested in "how can I change the world?" Whether it's, you know, in the healthcare space, whether it's in the people space, whether it's in the world environment space, there's this concern about, I want to use my engineering to change the world.

AH (00:44:31):

So that's kind of the vibe there, which makes it really interesting, because you'll go there like, yeah, this is Georgia Tech, where are all the geeks? It's like, they're all here, but they're real people, because they have interests and they like to do things that you would consider more social. So I think it's a great place. It's not what people would think. The graduates know and the alums know, but I don't think a lot of people would know that it's a real school. I mean, it is a real school, even though it's a techie school. People have lives there.

EW (00:45:07):

That makes sense. Sounds like fun.

AH (00:45:10):

It is.

EW (00:45:10):

I went to Harvey Mudd, and there was a little bit of that: applications, understanding how your engineering affects the world, was a big part of it. And I appreciated that. And actually, it makes sense, knowing the Georgia Tech grads. Yeah.

AH (00:45:27):

Cool.

CW (00:45:27):

Yeah.

EW (00:45:28):

Another random question. You're on EngineerGirl.org.

AH (00:45:33):

Yes.

EW (00:45:34):

What is that?

AH (00:45:35):

So that was a project that was started many years ago. It's K through 12. If you're a young lady, a young child, a young student that wanted to figure out what this engineering thing was, and you just wanted to ask a question, and you wanted to talk to a mentor, it's a forum that allows you to do that. So, as an example, I'll get a question probably about once a month from a student, and they ask questions as basic as "how many hours do you work a week?" to "I'm really interested in robotics, what kind of engineer should I be?" So you get general questions, but it's really a forum.

EW (00:46:14):

And so it's not a lot of required time, but you do provide some service to help people, to help young women who want to know more?

AH (00:46:26):

Correct. Correct. Yes.

EW (00:46:28):

That's pretty cool. I should look into it. Do you need, like, do you need a PhD?

AH (00:46:33):

No. Basically, the only requirement to be a mentor is to be an engineer. Not even to have graduated, but to be an engineer, 'cause, you know, there are some engineers that weren't classically trained. So either an engineer who graduated with an engineering degree, or someone doing engineering in some form or fashion, who could of course provide good advice.

EW (00:46:59):

Cool. As a roboticist, what do you think of Robot Wars?

AH (00:47:06):

So I think that things like Robot Wars give certain folks the ability to express themselves in ways that they enjoy. But I also think that in some regards it gives robots a bad name, because it's pretty easy: I can show you five movies, and four of the five have robots being at war, angry, destroying, things like that. So to have that in real life, it's like, "Oh no, robots are good." "Show it to me." "You know, robots are good. Just trust me." So it makes my life harder trying to explain the good of robotics sometimes.

CW (00:47:57):

Yeah. I think it is an easy way out for writers sometimes to make an enemy of it though. It's okay if we kill robots, 'cause they're not, they're not alive.

AH (00:48:05):

Yeah..

CW (00:48:05):

So that happens a lot. And so that is kind of a subconscious problem possibly going forward, is if robots are constantly perceived as nefarious -

AH (00:48:20):

Evil.

CW (00:48:20):

- or enemies or about to take over, then as they get more and more advanced, more suspicion will creep in.

AH (00:48:30):

Right. Right. And so I have mixed views, because again, it's robots. So then it's like, "Oh yeah, but it's like the only show that has robots in the title, so, you know, rah-rah!" That's the reason why I was a little hesitant in my answer, because my philosophy is not pro, but I mean, we just need more robots in general out there anyway. And it engages folks in terms of the makerspace and things like that.

EW (00:48:59):

That's fair.

CW (00:48:59):

I don't think they're really robots anyway. I think it's a misnomer.

AH (00:49:05):

Well, they're not autonomous.

CW (00:49:08):

Yeah.

EW (00:49:08):

But I believe there's a whole podcast devoted to whether it's a robot or not.

AH (00:49:12):

I know. So then the question is okay, 'cause I always think about this, I wouldn't necessarily classify them as autonomous robots, but if you have a robot body with a human brain, is that a robot or not? And then you can think about Robot Wars. You're like, well, that's a robot body with a human brain, so...

EW (00:49:33):

Oh, but the brain's remote.

CW (00:49:34):

Wow.

AH (00:49:35):

Yes. So you could have a whole conversation about that. Philosophical.

CW (00:49:38):

But maybe there's cyborgs at that point then, right? Alright.

EW (00:49:46):

Where do you think robotics will be in 10 or 15 years?

AH (00:49:50):

So 15 is easy. I will say that everyone will have a robot in some form or fashion. Just like everyone pretty much has a smartphone-ish.

EW (00:50:01):

Yes!

AH (00:50:01):

Now, what type of robot is a question. But I think, like right now, if you meet anyone over the age of 15 that doesn't have a smartphone, it's like, "Hm, all right, there's something wrong with this," at least in the developed countries. I think robotics will be the same. It might be your car, it might be your vacuum cleaner, it might be, you know, that cashier at the restaurant. It's going to be some type of robot.

AH (00:50:25):

10 years, I think we'll be seeing that transition. I think in 10 years it'll be like when the iPhone first came out: everyone wanted one, but there was a very small population that actually got one. I think in 10 years it'll be like that. They'll be available, they'll still be a little costly, it'll only be, you know, the early adopters kind of thing, but people will know about it. Like, "Oh, they finally have the, what is it, Level 5 self-driving car that's out," you know, that kind of thing.

EW (00:51:01):

That's actually a good point, because the iPhone celebrated its 10th anniversary just last week, and the change is remarkable. Having gotten a machine learning board and started to learn a little bit about that and a little bit about robotics, I'm a little scared. I mean, these things can do a lot. And as I look at things like the Robot Operating System, which I don't love, but which is interesting in that it's allowing an integration that wouldn't otherwise be possible, you only get so far into each field. We talked about how many different fields robotics covers, and I can't be an expert in all of them. But if I can get parts from experts in all of them and put them together, you end up with a humanoid body that moves smoothly and beautifully, and machine intelligence that looks really kind of scary. And that's all today.

AH (00:52:06):

Right. I imagine in-

EW (00:52:08):

I don't want to say the word singularity, but I think I'm going to have to.

AH (00:52:12):

Well, you know, this is my philosophy about the singularity. One, if it's going to happen, it's going to happen. But the fact is, when we create robots and we create these intelligent creatures and things like that, they are learning from us, and they're learning things like emotions and bonds and things like that. And so I truly believe that if we are creating this sentient being, the sentient being is part of our world, part of our environment. So in the end, it's part of our family. We all know there's evil people, but most times you're like, yeah, my family member's acting up, but you're not going to go out and just kill them. So my philosophy is, yeah, maybe it will come. But they're part of us. They're our family, they're our environment. And so for them to even think about destroying us means that they're thinking about destroying their family, which I don't think they will, because they grew up in our world. So that's just my philosophy.

EW (00:53:10):

So after the robot singularity, the robots are just going to spend all their time watching cat videos?

AH (00:53:15):

Exactly! Right! Because 15 percent of their knowledge is based on watching cat videos.

EW (00:53:28):

I keep thinking I want to ask you more details about my robot and what I should do with it and all of that. But if we do that, this show is going to be another hour. So I should ask Christopher, do you have anything before we start to close up?

CW (00:53:45):

What are the kids that you're teaching most excited about?

EW (00:53:51):

This is the college kids.

CW (00:53:51):

Yeah, the college kids. Sorry.

AH (00:53:53):

Oh, the college kids. I'm like "Oh, my kids?"

CW (00:53:56):

What are the college adults that you're teaching?

AH (00:54:00):

I think that they're excited that they can actually have jobs in robotics. I mean, at the end of the day, to be realistic. So even 10 years ago, if you were interested in robotics, the job was sort of a robotics job, but not really; you know, you weren't necessarily going to be working on a robot. Now kids are graduating and it's like, yeah, I'm actually going to be a roboticist, I'm working on a robot. I think that's exciting for the kids. They're working on things that they, you know, grew up with in science fiction that are now a reality. We grew up with science fiction and we became adults and it was still science fiction. They're growing up with the science fiction, and they're becoming adults, and it's like, "Oh, I get to work on this stuff that was science fiction." I think that's exciting.

CW (00:54:47):

What do those robots look like that they're working on? I mean, I'm trying to think, okay, when I graduated from college, what would a robot be that you could get a job working on? And it would be pretty much an industrial arm, you know, in a factory. So what do those jobs look like now?

AH (00:55:04):

I mean, they're working at places like Waymo on the autonomous car. They're working on healthcare robots, like the one that delivers goods and laundry in hotels. They're working on surgical robotics, like da Vinci. I mean, they're working on real robots.

CW (00:55:20):

Yeah.

AH (00:55:20):

That's the thing, not just industrial. They're working on kind of the wealth of things that are going on.

EW (00:55:27):

Hi Svec. I know you also work on robots. He works at iRobot, making vacuuming robots.

AH (00:55:33):

Oh, see?

EW (00:55:33):

Okay. So kids, college students is what we should've said there.

CW (00:55:39):

Sorry.

EW (00:55:39):

What about kids? What about the younger group? What kind of robots are they working with? What are they excited about that you see?

AH (00:55:48):

So with them, I think they're still working with educational robotics systems. They're still working on things like the Lego, maybe Dash and Dot if they're even younger; they're still working on those kinds of platforms. So there's still a little bit of a disconnect between that and working on, like, real robots, like vacuum cleaner robots. And so when they think about robots, they are still in the fantasy land. They're still thinking about, you know, the Rosies, because they don't have as much touch with it. They're not in the schools working on humanoids, for example, whereas the college kids, more likely than not, will get a robot kit that's a humanoid robot. Not necessarily in the K through 12 space yet. And that's just a yet.

EW (00:56:41):

And earlier I asked if somebody wanted to get into robotics and, and then we specified adult hobbyist, likely engineer. What about for kids? What if there's a high school student or a middle school student, or even a parent wanting to engage with their high school, middle school students? What kind of robotics should they look at?

AH (00:57:03):

So I still think that the Lego robotics kit, because of the amount of curriculum that's been developed to support it, is still a really good package. And there are some platforms coming out that are more isolated, like I said, the Dash and Dot robots. There's the Cozmo robot, which is this little tiny platform that came out maybe a year ago, that's starting to develop curriculum as well. So I'm excited to see some competitors in that space. But right now, you know, you need the curriculum, you need the lessons, you need basically the guidance, like documentation, to get through figuring it all out. So right now it's Legos and FIRST Robotics and the Vex kit that are associated with some of the competitions.

EW (00:57:55):

And this is Lego Mindstorms. Is that still it?

AH (00:57:58):

Yeah, Lego Mindstorms. Lego Mindstorms as an independent. And then if you're a team, again, there are some like FIRST and Botball and Vex, which are team-based as well.

EW (00:58:13):

And the advantage with that, as you were saying, was the curriculum, which is good for parents, because you do reach the point of, okay, I've put it together, what now? And the curriculum says, okay, now follow a line. Now that you can follow a line, you can have it deliver brownies from the kitchen.

AH (00:58:30):

Right. Exactly.

EW (00:58:33):

That makes a lot of sense. Well, now we really have gone overtime.

AH (00:58:37):

Alright.

EW (00:58:37):

Ayanna, it has been great to talk to you. Do you have any thoughts you would like to leave our audience with?

AH (00:58:47):

So, I have this quote, in fact, it's on my webpage, which is basically about being an engineer. And I think, and this is where the social impact comes in, as engineers, we do have a responsibility. We have the talent, we have the skill, and we have the responsibility to make this world a much better place than the way it is now. We have the power to change the world, make a positive impact, and we should use it for that.

EW (00:59:17):

I 100% agree, although you stole my final thoughts, so -

AH (00:59:28):

Oh! Sorry!

EW (00:59:28):

Our guest has been Professor Ayanna Howard. Dr. Howard is Professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering (we never got back to the bioethics) in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. Thank you so much for being with us.

AH (00:59:52):

Thank you. This was fun.

EW (00:59:54):

I want to thank Christopher for producing and co-hosting, and thank Hackaday for their 30% off coupon (I forgot to say the 30% earlier) for the MeArm at their store, using the code awesomeembedded. And of course, thank you for listening.

EW (01:00:12):

I have chosen a new quote to leave you with. And this one is sort of about singularity. It's from Emily Dickinson. "Hope is the thing with feathers- / That perches in the soul- / And sings the tune without the words- / And never stops - at all -."

EW (01:00:32):

Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.