367: Data of Our Lives
Transcript from 367: Data of Our Lives with Ayanna Howard, Elecia White, and Christopher White.
EW (00:00:06):
Welcome to Embedded. I am Elecia White, here with Christopher White. We are very pleased to welcome Dr. Ayanna Howard back to the show.
CW (00:00:17):
Hello, Ayanna. Thanks for coming back.
AH (00:00:19):
Thank you. This will be fun.
EW (00:00:21):
Could you tell us about yourself as if it had been two or three years since we last talked to you?
AH (00:00:29):
So I am a roboticist. My official title is Dean of Engineering at the Ohio State University. But really when I think about who I am, I'm defined by the things that I build, which is robots. I program and design robots for the good of humanity.
EW (00:00:49):
And your position at Ohio State University is new. You were at Georgia Tech?
AH (00:00:55):
Correct. Prior to Ohio State, I was Chair Professor at the School of Interactive Computing at Georgia Tech.
EW (00:01:04):
I remember, we talked to you last time, you were at Georgia Tech.
AH (00:01:08):
I was.
EW (00:01:08):
And it was before you were even a chair, I believe.
AH (00:01:11):
Yeah, so I'd been at Georgia Tech for 16 years. And so I was Associate Professor, Full Professor, Endowed Chair Professor, had a bunch of different associate chair positions, director positions, and then Chair.
EW (00:01:28):
Well, I'm excited that you're a dean now. That's a position of power. I hope you use it wisely.
AH (00:01:36):
I will, I will. It also allows me to set a gold standard for engineering nationwide, which I'm excited about.
EW (00:01:44):
Okay. So before we ask you more questions about that, and about your book, we want to do lightning round. Are you ready?
AH (00:01:52):
Yes.
CW (00:01:53):
Favorite fictional human.
AH (00:01:57):
Bionic Woman.
EW (00:01:59):
Favorite building on the Georgia Tech campus.
AH (00:02:02):
Klaus.
CW (00:02:04):
Best food in Atlanta.
AH (00:02:07):
Sushi.
EW (00:02:09):
Three words you'd like other people to use to describe you.
AH (00:02:13):
Humble, compassionate, and intelligent.
CW (00:02:18):
Do you think the singularity is closer than it was five years ago?
AH (00:02:22):
Yes.
EW (00:02:23):
What is the best animal shape for a robot that is supposed to engender trust in its users?
AH (00:02:31):
Dog.
EW (00:02:31):
Yes.
CW (00:02:33):
Yeah, definitely.
EW (00:02:35):
Well, those little seals are really cute.
CW (00:02:38):
Yeah -
AH (00:02:39):
They are.
CW (00:02:41):
Pretty sharp teeth. Complete one project or start a dozen?
AH (00:02:47):
Oh, complete one. To 90%.
EW (00:02:52):
Do you have a tip everyone should know?
AH (00:02:54):
Believe in yourself.
EW (00:02:58):
That's a good one. Okay. You wrote a book called "Sex, Race and Robots: How to Be Human in the Age of AI."
AH (00:03:10):
Yes.
EW (00:03:10):
I mean, I want to say what's it about, but that kind of explained it, didn't it?
CW (00:03:13):
Does it?
AH (00:03:15):
Oh, I don't know, because it just means "Sex, Race and Robots" -
CW (00:03:18):
I think there's some details that -
EW (00:03:18):
Yes.
AH (00:03:18):
If you're like, "Wait, sex? Race? Robots? Where are we going with this?" Yeah, so, one, it makes you interested in what's going on.
EW (00:03:30):
It was very enjoyable, but it's only available on Audible audiobook. How does that work?
AH (00:03:37):
So, and it will be in print, but probably not for, I think it's a year and a half from now. And so, the Audible is basically the spoken word version of the written book.
AH (00:03:52):
And so one of the reasons why it was an Audible, an audio, was really to drive in the accessibility so that it felt much more human, much more connected, for people to really delve into some of the concepts of the book.
EW (00:04:11):
When I was listening to it, it was very warm. It felt like I was hearing from a friend about their life and the technology all at one time. Is it for a general audience or for a technical one?
AH (00:04:30):
No, it's for a general audience. But there's a little bit of a trick. So it's for,...I would just say, a general audience that just wants to know a little bit about artificial intelligence...Also, maybe I would say, intellectually curious.
AH (00:04:45):
But if you're a tech-head like me, I dribble all these kinds of things throughout. So, even if it's, you're a general reader, if you're a tech person, you're like, "Oh! Oh my gosh, that is so funny. Oh, I got that. I remember that." And so,...the Easter eggs are in there.
EW (00:05:03):
They definitely, definitely are. You grew up in Pasadena, California, and I grew up inland from there, but still in California. And there were so many things you talked about that I was like, "Oh, I remember doing that."
EW (00:05:20):
And even the tech,...you talked about when the internet didn't really exist, and how it started at the beginning of your career, and how it changed everything.
AH (00:05:31):
Yes. I mean modems, people, students are like "Modems, what is that? Telephone line? Landline? What is that?" Yes.
EW (00:05:41):
Did you have someone in mind when you wrote it?
AH (00:05:47):
So when I wrote it, I really thought about, kind of the young, up-and-coming students that had just graduated really. Thinking about them being the next generation that is really going to live in this world that we're defining with the use of artificial intelligence.
EW (00:06:08):
So your subtitle promises how to be a human in the age of AI. So I feel like I really should ask you this. How should we be humans in the age of AI?
AH (00:06:18):
I mean, we should embrace our individuality for one, and our differences. And I think, so that's the big thing. The other is, is that I think we need to lean into those things that make us human.
AH (00:06:31):
Those things such as relationships between people. Those things such as teaching, and learning, and creativity, and understanding social justice. Those things that really define us as human are the things that we need to lean into, given that we are living in this technology-infused world that is becoming much more technology-based.
EW (00:06:53):
But in part of the book, you advocated against emotions.
AH (00:07:00):
So I advocated against people getting emotional, right? So emotions,...they're actually part of our DNA. And I talk about emotions, from very early on, it's a survival mechanism, right? And the way that we react to people,...some of it's a learned behavior, some of it's reactive, some of it's in our genetics, DNA.
AH (00:07:23):
But when I was saying, we shouldn't be emotional, it's that, when we're interacting with robots, when we're interacting with AI, a lot of times we interact based on an emotional connection, right? It's very much like, back in the day when email came out, and people would send emails in all capital letters, because they were angry, right?
AH (00:07:49):
That's an emotional rendition of you getting angry. And everyone's like, "Oh, you're angry, stop yelling," right? And when I say, stop being emotional, stop being emotional with these robots, you're being reactive.
AH (00:08:03):
You're going to the first search term because "Oh yeah. Yes, yes. That's it. That's it. That's it. I'm excited about it." We need to just pause, and stop, and think a little bit. Otherwise we will get in trouble.
EW (00:08:15):
By "in trouble," do you mean horrifically manipulated by the powers of big business?
AH (00:08:22):
Yes. Horrifically manipulated, and not even really think about it until it's too late.
CW (00:08:28):
But I yell at my computer all the time. Nothing bad has happened yet.
AH (00:08:35):
Well, okay, so you yell at your computer, but then do you go on a spending spree so that you feel better?
CW (00:08:41):
I don't think -
EW (00:08:42):
Oh, don't ask that question.
CW (00:08:43):
I don't think those are connected.
AH (00:08:48):
I don't know. So look at this. Imagine I'm a big company, and I want you to go on a spending spree, right? So I can intentionally do things because I know how to manipulate your emotions to make you angry.
AH (00:08:56):
And if I know that your trigger for when you're angry is to go shopping, right? It's like, "Oh, you're the perfect model. I'm just going to do a couple of little things. I'm going to put things on your feed just to get you riled up, because I know then I'm going to make a lot of money, because you're going to go shopping."
CW (00:09:11):
That's a degree of sophistication beyond,...when people who are kind of paying attention pay attention to algorithmic advertising, and that kind of thing. They're usually thinking about, well, this company knows where I surf and what kinds of things I like.
CW (00:09:26):
Facebook is seeing I've gone to these sites, and therefore I'm interested in analog synthesizers, or something like that. And so I end up with endless analog synthesizers in my Instagram feed.
CW (00:09:38):
But what you're saying is, taking that a step further, and trying to figure out this person takes certain actions based on certain, let's call them activations.
AH (00:09:49):
[Affirmative].
CW (00:09:51):
And therefore we can go further than just knowing what they like and know when they're most likely to take action?
AH (00:10:00):
Correct.
CW (00:10:01):
That's not very nice.
EW (00:10:02):
Well, and a lot of people when they're emotional, their barriers are lowered for all kinds of things.
CW (00:10:09):
Yes. And I think of con men doing that. I haven't really considered codifying an AI con man.
AH (00:10:17):
Yeah. And I don't think that people think of it as codifying an AI con man. I think it's just that people are driving, at the end of the day, people want to make money. Businesses want to make money. So what is the best way to do that? You want to maximize your touch points. You want to maximize the output.
AH (00:10:39):
And so...I wouldn't say one of the nice things, but one of the nice things about AI is that, with all the data that's out there, you can do these two or three steps removed, and figure out someone. And figure out what their triggers are, and figure out what's the next step.
AH (00:10:54):
As an example, if I start suddenly seeing you searching for colleges, right? And I know that you're a college-educated person, mother, and if you're searching for colleges, that probably means you might have someone in your household that's about to go to college.
AH (00:11:13):
You know what? That means that, hmm, in about two or three years, you might be thinking about graduation, maybe a nice trip, maybe a car for the child, maybe in year two, right? These are things that we know happen in life. The data can show it.
AH (00:11:31):
You just have to think two or three steps ahead is all. Some human basically has to say, "Okay, take all the data. Think two or three steps ahead. What's the next thing, two years from now or three years from now?"
EW (00:11:43):
How much of this is codified, intentional, and how much of it just comes out of the data of our lives?
AH (00:11:52):
Most of it is coming out of the data of our lives right now, but we know that there is experimentation going on to try to codify it a little bit better.
CW (00:12:03):
Yeah. Because we hear a lot about bias in training of machine learning. Certainly the racial bias of a lot of image sets is one that's a famous example, but I haven't heard a lot about kind of weaponizing bias intentionally yet. And I feel like...it's just not being talked about, even though it's probably being done.
AH (00:12:25):
It is. I mean,...some of it is rumors. Some of it is media that's found leaked messages. But I remember there was one leaked study where one of the advertising companies was selling profiles of teenage girls.
AH (00:12:45):
And if anyone has a teenage girl, they can get very emotional. But they were selling it as an advertising mechanism, as a profile.
AH (00:12:55):
So what does that tell you? Sellers feel that there's some folks that know that teenage girls can be emotional certain times of their journey as a teenager. And one of the triggers that they had mentioned in this report was depression.
EW (00:13:15):
Of course. I mean, of course we get profiled. I mean, they ask us some of these questions. What is your age range, 40 to 45, 46 to 50?
EW (00:13:28):
They ask us our incomes, our genders, our identities, our likes, whether or not we're managers or engineers, whether or not we like vanilla ice cream or chocolate. And we fill out these forms, and they aren't all the same form, but we're not as anonymous as we'd like to be.
AH (00:13:51):
Not anonymous at all.
EW (00:13:54):
Yes, exactly. How do we stop feeding into this machine?
CW (00:14:00):
[Laughter].
EW (00:14:00):
Can we?
AH (00:14:04):
So...the problem is, is I don't think we can at this point. And it's only because we have so many services that are provided to us that is based on the data now, right?
AH (00:14:17):
So what that means is that if we suddenly decide that I individually am not going to give any of my data out there, I'm going to scrub myself from the web, like that's even possible, it also means that you won't have access to all the things that you might want to in terms of better loan rates, if you're on that side of the spectrum, or better health care options, right?
AH (00:14:43):
And so right now there's these profiles that are created that are beneficial, right? It's just that they're also detrimental to certain groups in certain populations. But what I do think is that we can control how the data is being used.
AH (00:15:00):
I think we can ask for a little bit more transparency on...consenting and knowing, "Okay, if I'm giving this data really, what are you doing with it? I just want to know, and don't lie."
AH (00:15:18):
It's like, "I'm giving you a free coupon. What else am I giving you?" "No, no, no, no, no. You're giving me a free coupon, but then you're also using it because you're selling it to...advertisers, right?" Give us the option to decide how it's being used when it's being used against us.
AH (00:15:35):
That's really I think where we are now, because I don't think we can change the clock to say, "Okay, now let's just stop collecting data."
EW (00:15:44):
I do, often, whenever I'm asked about cookies, want to thank the entire European Union for GDPR, for making everyone actually be a little more transparent.
AH (00:16:00):
I agree. And as you know, it's also seeped into California and San Francisco area. I think Washington has also looked at some of these GDPR-related kind of regulations as well. I think it's moving in the right direction.
EW (00:16:20):
Okay. So that's how all of us are being manipulated by AI. And it's easy to understand that, because the more data we give them, the easier we are to target. But for us as white, cishet, probably more than middle-class at this point folks, we're mostly being advertised to or manipulated about things that relate to advertisement.
EW (00:16:52):
And...I mean, that's not awful. I'm not going to say that's terrible until my medical history is used against me. It's not a big deal, but that is distinctly not true for a lot of people. There's a darker side to this manipulation. Your book covers some of that.
AH (00:17:14):
There's a darker side. Yeah. And actually I would push back a little bit. It's not just advertising. It's also middle-class. If you have kids that you want to go into college, you might actually be in the positive or the negative, right?
AH (00:17:35):
The data that are being used for college applications to determine who gets in are much more being used in AI. If you're going for your next job, a lot of the recruitment tools and a lot of the filtering tools, irrespective of your economics, are being used based on past data.
AH (00:17:52):
As you get older, I'm sure, as we get into, at least in the U.S., social security, and...that age of 65 or 67, I don't know what it is, right? That may move because the AI has figured out that people are living too long, that we've got to make it 78 at some point. And here's all the data that supports that.
AH (00:18:13):
...It's not just advertisement. Today. But the negatives are, it's also being used in surveillance. It's also being used in predictive policing. It's also being used somewhat in the healthcare system. It's being used in applications related to facial recognition.
AH (00:18:33):
It's being used in language, and language models, and the biases, and natural language processing that is in our Siri and Alexa. It's being used in these applications that are in some cases harmful, because they're not trained with all of the representation of what makes us people, and human, and non-Western versus Western, and all of these aspects.
EW (00:18:57):
Some people say it's just the data sets, that we just need broader data sets. We need ones that show more people, that have more voices. Is that enough?
AH (00:19:09):
No. So that's just one piece of the puzzle. And I will say, 10 years ago, maybe 7 years ago, we were all talking about, "We just need more data. We just need more data." But the fact is, it's not just the data.
AH (00:19:23):
We do need more representative data, but it's also the way that we code up the algorithms. The parameters that we select, they have developer biases. It's about how we choose the outcomes. What are we measuring? What are we comparing? Are we trying to figure out loan rates? Are we trying to figure out the amount of the loan?
AH (00:19:44):
Are we trying to figure out neighborhoods? Someone has to select what we're learning from the data. There's human biases in that. And even the data itself, how it's coded, because typically right now there are human coders that label the data.
AH (00:19:57):
If you think about facial recognition, happy, sad, here's a face, here's not a face, there's biases in the human coders. So the data, as well as the labels. And really, we're now discovering, and again, 10 years ago, I don't think all of us were talking about this, but it's throughout the entire pipeline.
AH (00:20:16):
It's not just one place, which is the data. Although, having better data would be nice. And so it's one of the easier things, quote, unquote, to address, because you're like, "Oh, data, data, data. We could fix that." But it's throughout the pipeline.
EW (00:20:33):
I think that's a very important thing to consider. I mean, when you said that in the book, I was like, "I'm working on a DNN, just a standard neural network. I'm not even doing anything really creative with it. How can I possibly have bias in what is just a standard off-the-shelf network?" But the bias can come in on what I decide is good enough.
EW (00:21:06):
If I don't test it under the right conditions...I test it on my desk. I test it with me. And...the testing is part of it. And not accepting the error: "If it can't read my pulse, then it's broken." But if I don't have it check other people's pulses, and not just the four coworkers who look like me, I won't know it's broken.
AH (00:21:40):
Right.
EW (00:21:40):
So it's the algorithm, it's the data. And it's the testing.
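[Editor's note: a small illustrative sketch of the testing point above, reporting a metric per group instead of one aggregate number. The pulse-sensor scenario, group labels, tolerance, and readings are hypothetical, not data from the show.]

    from collections import defaultdict

    def error_rate_by_group(readings, tolerance_bpm=5):
        """readings: iterable of (group, predicted_bpm, actual_bpm) tuples."""
        errors, counts = defaultdict(int), defaultdict(int)
        for group, predicted, actual in readings:
            counts[group] += 1
            if abs(predicted - actual) > tolerance_bpm:
                errors[group] += 1
        return {group: errors[group] / counts[group] for group in counts}

    # Hypothetical bench-test results tagged by skin tone: the aggregate can
    # look acceptable while one group's readings are consistently off.
    readings = [("lighter", 72, 71), ("lighter", 65, 66), ("lighter", 80, 78),
                ("darker", 58, 74), ("darker", 92, 75), ("darker", 73, 72)]
    print(error_rate_by_group(readings))  # lighter ~0.0, darker ~0.67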
AH (00:21:46):
It's all of it. One example, I heard this as an example on cameras back in the day, how can cameras be racist, right? It's a hardware device. Well, back in the day, if you imaged anyone who had darker pigment, everyone who had darker pigment, they all looked the same.
AH (00:22:06):
So if you were, say a dark-skinned woman versus a lighter-skinned, black woman, you all looked the same. It's because the way they chose the range of colors and the optics, it excluded basically the range of darker skin pigments. So someone selected that, right?
AH (00:22:27):
And it wasn't changed until basically the candy companies, Hershey's, and the furniture companies, like, "We can't see our furniture. You can't see the beauty of the browns." And so they started fixing it. Not because of people, but because it was a commercial application that was like, "Oh yeah, we need to do this," right?
AH (00:22:46):
But that was a selection choice. And yet you're like, "Well, how can a camera possibly be biased? It's just hardware, right? It just takes pictures." But a human had to make a decision about how it operated, and it's the same with neural networks.
EW (00:23:02):
And some of these are our past biases being fed to the future. I think there was...an AI resume reader that would reject women's resumes because they never hired women. So, you know.
AH (00:23:21):
Right. They're not good employees, right? It's like, "They're not good employees. They weren't hired. They didn't have great recommendation letters. They didn't, right? They didn't exist. So of course, we're going to exclude them."
EW (00:23:33):
I mean,...those are the things that are supposed to get better when we make them mechanized.
CW (00:23:41):
Why? Making things mechanized is -
EW (00:23:44):
They're supposed to be fairer.
CW (00:23:44):
- is just to be easy...I've never assumed something is fair because it's automated.
AH (00:23:54):
Good point.
CW (00:23:54):
I'm curious, no -
EW (00:23:56):
I think it should be because if it was -
CW (00:23:59):
It's an emotionless computer, to some extent? Ah, okay.
EW (00:24:01):
It's an emotionless computer. It should be judging based on the merits -
CW (00:24:05):
Yeah, yeah.
EW (00:24:05):
- not on the things it shouldn't be able to tell, like gender, because that's not on my resume. But you can tell my gender based on my name.
CW (00:24:14):
But...you can't make something know what merit means without bias if you're teaching it that, right?
EW (00:24:20):
If it doesn't understand anything.
CW (00:24:22):
No, I mean, it's a human making this. I mean, a human's making a copy of a human, right?
AH (00:24:28):
Basically, so what's going on?
EW (00:24:30):
A bad copy.
CW (00:24:32):
Well, maybe, maybe not.
AH (00:24:33):
Yeah.
CW (00:24:33):
Maybe just a really good copy of a normally-flawed human being.
AH (00:24:38):
Yeah. So I think one is, these machines, just because they're automated, aren't fairer. But I do think that we can make them fairer, right? I do think that we can, and I wouldn't say remove bias.
AH (00:24:52):
I truly don't believe we can ever remove bias as long as people exist, but I think we can mitigate it, right? I think we can reduce it so that the machine is as good as the best unbiased quote, unquote, human, right? I think we can do that.
EW (00:25:08):
But to remove the bias, step one is identifying it, isn't it? Or do you have a different path to get there?
AH (00:25:15):
No, you have to identify the bias. And that's why I said, I don't think we'll ever remove the bias, because we all have biases, right? We all have biases.
AH (00:25:24):
And it might be a bias with respect to religion. It might be a bias with respect to socioeconomics, with respect to gender, race, ethnicity, age, right? It doesn't matter. We all have some bias. And therefore, if any of us are involved, that bias is going to creep in, because we don't even know that it exists sometimes.
EW (00:25:42):
It's like being a fish in water.
AH (00:25:46):
Yes. You don't realize you're in the water.
EW (00:25:49):
...This is very much a human problem and not as much a technology problem, but how do we fix this?
AH (00:25:58):
So the way that I think about fixing it is twofold. One is, as a developer, because I'm a technologist, I'm a developer. I think we need to rethink the concept of what we call development, right?
AH (00:26:12):
So typically, it's coders, it's technologists that are working on the technology, and then it's thrown out in terms of, "Okay, here's the application. You folks take it." I think when we think about development, just like if you think about movies and films, right? The movie crew,... everyone is there, thinking about it, scripting it out, creative.
AH (00:26:32):
You also have the filmmakers. You also have the technologists that are part of the team. And so I think we need to rethink how we design these systems such that the team is not just developers and technologists. The team is the social scientists. It's the ethicists. It's also the coders and the developers. It's also the human factors people.
AH (00:26:51):
And that is the team. And the team does not produce a product, unless everyone is represented that can understand these things. That's one. The other thing is...I think,...and this is why I did the book, is I want people to feel empowered, to push back on the technology that they're using and demand stuff.
AH (00:27:12):
And basically, I've seen, and I give an example, everyone wants to go green now, right? Everyone, sustainability, go green. These companies, zero emissions, zero emissions. This is actually very, very expensive for a company, right?
AH (00:27:28):
This is not a cheap thing, you just turn on a switch, and like, "Oh yeah, we're zero -. " No, they actually have to put in a lot of funding, a lot of effort, a lot of strategy. And at the end of the day, it's not like they're selling more products, right?
AH (00:27:40):
But there was enough of people that...pushed back, like, "We need to go green. We're not going to sell. We're going to go to these other companies." And companies finally said, "Oh, you know what, maybe this is something we should focus on." And once you have a few do it, then everyone else comes on board.
AH (00:27:56):
And I think that we as a community, have that power to say, "Look, we need to make sure that we have unbiased technology. We have unbiased algorithms. We're going to do it, else we're going to go to these startup companies."
AH (00:28:08):
"And we're not going to use company X, company Y, company Z's search engines, and we're going to have a movement. I think we can change the world. I can think we can change these companies.
EW (00:28:19):
I love the analogy because it does give hope that things can change and that consumers can push some of that change. There are companies that do green certification, and there are standards now.
EW (00:28:34):
Do you think we'll be seeing racial bias, and gender bias, and religious bias, and all other biases standardized? I mean, I guess we don't standardize the biases. We standardize the non-biases?
CW (00:28:55):
What we need is an AI to detect bias.
EW (00:28:57):
I believe there are some.
AH (00:28:59):
There are some. I actually think that we are going to start seeing a lot more third-party certification companies that will come in and certify, right? "I have a new product. I have a new AI algorithm. I'm going to petition/contract this company that's a third party to come in and do my audit," right?
AH (00:29:22):
We do this for our finances all the time. "Third party, come audit my financial files, because otherwise the SEC might get on me if we don't have a third-party auditor." This is a fact. And so I see that happening more and more.
EW (00:29:40):
You said SEC, the Securities Commission, but I immediately went to FCC with the radios, because that is more what I think about when I certify things. But I could see...exactly what you're saying. I would go to a company, and get audited, and they would check for all of the things that I don't know how to do for accessibility and fairness.
AH (00:30:08):
Exactly.
EW (00:30:08):
Is fairness the right word?
AH (00:30:12):
So, no one has converged on the actual word. So there's fairness, accountability, ethics -
EW (00:30:22):
Equity.
AH (00:30:22):
- I mean, equity. I mean, there's a hodgepodge of words that the community is still kind of working through, figuring out,...transparency, explainability. There is no convergence yet of what it is that we're talking about, because even fairness, fairness with respect to whose criteria, right?
EW (00:30:44):
Exactly.
CW (00:30:44):
Yeah. That's what I was about to ask, because it makes me slightly uncomfortable to think about certifying this, not just in terms of the goal, but because it could give a tremendous amount of power to that organization, right?
CW (00:30:59):
And who watches the watchers at that point? I know it's kind of a glib response to that, but there is some discomfort in my mind, "Okay. If we standardize this, how are we to know that the standard isn't biased or could be misused?"
EW (00:31:13):
Oh, it will be.
AH (00:31:15):
Yeah. Yeah, it will be...I don't think it's the standard of quote, unquote, what is fairness, and what is bias? I think it's the standard process of how do you assess an audit.
CW (00:31:30):
Yeah, okay.
AH (00:31:30):
Which is two different things.
CW (00:31:32):
A very open sort of process.
AH (00:31:34):
Right, right.
EW (00:31:35):
And so everybody knows the rules.
AH (00:31:37):
Yeah. Everyone knows the rules, right? They know what they have to do. Like for FCC, right?...Accessibility, you know you have to think about it. How you make, say your website, or your language, or if you're recording, accessible is going to be different depending on the medium that you're using, right?
AH (00:31:58):
If it's the image space, or the sound space, or, right? Accessibility is going to have a different process, a different tool, depending on your medium of expression, of communication.
EW (00:32:12):
And there are likely to be different levels. I mean, when we started the podcast, we were audio only. And then about a year ago we started doing transcripts, because I realized how important that was for accessibility.
EW (00:32:25):
And that would have given us another -3 or -5 on some standards board. It's not like we'd have to check everything off on the first pass.
AH (00:32:39):
Correct. Correct.
EW (00:32:41):
Christopher really doesn't like this. He's totally making faces.
CW (00:32:44):
No, I don't dislike it. I've never really thought about it before, so I'm wrestling with it a bit. I think it's great if it's something that people put out there, and, "We're certified this way," and that leads to becoming more accepted than another company that doesn't.
EW (00:32:58):
Like fair trade chocolate.
CW (00:32:59):
Sure...I don't know that anybody's suggesting this, but I don't want people coming in and saying, "You can't exist because you don't conform to - "
AH (00:33:10):
Well, let's talk about this with respect to the FDA. Okay. Think about it. I want to create a drug or a medicine. I can't just go into my garage, and put some chemicals together, and be like, "Yeah, here's something. I've been trying it on my dog. Look at that. It makes them healthier. I'm now going to sell it -"
CW (00:33:31):
Sure.
AH (00:33:31):
"- to people."
EW (00:33:32):
I mean, you totally can, for a little while.
AH (00:33:36):
Right. And then you can go to jail. If anyone dies, you definitely go to jail. So what is the FDA? The FDA is a third-party agency, right? There are some processes. You have to collect data, you have to talk about, even from your concept, it's not like, "Oh, here's my drug. I'm going to go to clinical trial three." No, no, no.
AH (00:33:57):
From day zero, as you are putting your chemicals together, you have to record it. You have to discuss it. You have to talk about the processes, when you actually start your trials, or your pilots, "Who was part of your pilot study? How did you find them?" Right? "Were they coerced in any way? What was the lab like?" Right?
AH (00:34:16):
And there's all of this paperwork, there's all of these things that the FDA requires. And then there's levels of harm, right? As in a class one, or two, or three device, right? And depending on the level of harm, it's associated with how you have to document your processes and the things that you need.
EW (00:34:34):
But then there are things that aren't FDA-approved. You can buy a whole grocery store's worth of stuff that the FDA didn't approve because -
AH (00:34:43):
Exactly.
EW (00:34:43):
- they don't cause harm. And so maybe surveillance systems get anti-bias approved, but I don't know, our cold medicine doesn't.
AH (00:34:56):
Right. Or the app that selects your music, right? Because there is some bias in the music, because it can recognize you based on your past preferences and associates you with others, right? But maybe that is like, "Yeah, that's so not harmful."
EW (00:35:12):
That's okay. That's a bias that I accept as opposed to police profiling would be a bias I don't accept.
CW (00:35:21):
No, it's tricky. No,...I wouldn't want to put a music company through, if we're using the FDA as an analogy, I wouldn't want every software company to have to go through an FDA-like process, even at low-level of concern, because one, they can't scale. There's way more software companies than there are drug companies.
AH (00:35:42):
Right. But even with the FDA, for example, with AI, exercise apps are actually not regulated by the FDA unless the exercise information is being fed to your doctor who's then using it, right?
CW (00:35:58):
Right, right.
AH (00:35:58):
There's some criteria.
CW (00:36:00):
Yeah.
AH (00:36:00):
And so I would say it's the same thing. If you're creating a music app, it's fine. But if your music app is then feeding into surveillance -
CW (00:36:06):
Yes. [Laughter].
AH (00:36:06):
Well -
CW (00:36:09):
Yeah. And I think it all comes back to what we were originally talking about with privacy, and where the data is going, and how it's being used, right?
CW (00:36:19):
And that's perhaps the root of it is, "Okay, this data exists...Is it being used in an innocuous way, like you're saying, for exercise, or over-the-counter supplement, or is it being used in a directed way toward surveillance, or controlling population behavior, or something?"
EW (00:36:41):
It's so easy to cross the line now.
CW (00:36:42):
It is. Yeah. That's why I'm struggling with this. Yeah. It's a really thorny problem. And I think...it's thorny and new, right?
AH (00:36:53):
It is new.
EW (00:36:53):
Oh, I don't think it's new for some people.
CW (00:36:55):
Well,...before we had AIs...doing these things we had actuaries -
EW (00:37:00):
Actuaries.
CW (00:37:00):
- who were saying, "Well, people tend to die later, so let's move this over and ,-
EW (00:37:05):
Insurance. Yeah.
CW (00:37:05):
- or behaviors look like this. So the biases were always there. They were just in human AIs.
AH (00:37:12):
Human form.
CW (00:37:13):
Or natural intelligence, I guess? Yeah. Yeah. Wow.
EW (00:37:20):
When do you think we'll get there?
CW (00:37:24):
Where's there?
AH (00:37:26):
Well, yeah, where's there. I think right now we are at the crossroads where a lot of things are happening, because the systems are being used now in scenarios that could cause harm, right? And so I think the conversation is happening. Government is being involved.
AH (00:37:48):
If you look at the number of potential bills that might be coming through, it's happening. And the question is what is going to come out of it, and are there going to be best practices or not? Who's going to control it? Will it be government-controlled? Will it be civil liberty groups? We don't know, but it's happening now. Yeah.
EW (00:38:16):
Are you excited or horrified?
AH (00:38:17):
A little bit of both. I'm excited, because this is my area. I've been doing research in this field for quite a number of years. I'm horrified, because there are no answers, and I feel like there's not enough people that are concerned about it that should be.
CW (00:38:38):
...What's your preferred approach to go after kind of the, I don't want to say low-hanging fruit, because that makes them sound less important, to go after the big things like police surveillance, and predictive policing, and that kind of thing first?
CW (00:38:58):
Or is it better to kind of work toward a generic framework that can be applied in all sorts of levels? Or to do both at the same time? I don't know.
AH (00:39:08):
So I would do both, only because the big things, like surveillance, predictive policing, are now, right? These are systems that are being used now. And there's no real box around their use, how they can be used, evaluation or auditing of the biases that might be present, right?
AH (00:39:31):
That doesn't exist, and yet they're being used. And so...those issues need to be addressed now, because they're being used now. It's basically like, "Oh, we sold all the nuclear weapons. Oh man, do we have any rules on how they're supposed to be used? No? Okay,...I think we might - "
CW (00:39:55):
Right, right, right.
AH (00:39:55):
Right? That's where we are. But then we also need -
EW (00:39:57):
Maybe we should write the manual.
AH (00:39:58):
Right, yeah, I think we need to write the manual, because there wasn't a big red button. Oh my gosh, we forgot the big red button, right? That's where we are.
CW (00:40:09):
At least with nuclear weapons, you can't just copy them with a disk drive, and then ship more of them out on the internet. [Laughter].
AH (00:40:16):
[Laughter]. Yeah. That's the good thing. Yeah, but in the meantime, we also need to think about how to do this more strategically, so that we aren't in this scenario where we're like, "Oh my gosh, what's going on?"
CW (00:40:29):
Besides academia, or maybe academia is the best place, but what are the organizations that are really kind of driving this conversation?
AH (00:40:40):
So some of the non-profits are Partnership on AI, AI Now Institute. So these are non-profit organizations that are coming up with reports, coming up with best practices, looking at what companies are doing, trying to convene groups together to have discussions. But...it's a drip in the bucket.
EW (00:41:09):
We've talked a little bit about privacy, so maybe this is related to that. But is this going to be like curb cuts? Where they fought for so long because our curbs were perfect, and then somebody comes along, and Berkeley says, "No, we're going to make ramps, because people need ramps who are in wheelchairs."
EW (00:41:29):
And then everybody realizes they always wanted them. Is it going to be like that, where once we have the rules, everybody's going to like, "Oh yeah, this was always better. Why did we fight it?"
AH (00:41:42):
I think so. Because one of the things is,...just like with accessibility, as we know, when you make something accessible, it actually makes it accessible to a wider range of individuals than you thought you originally were targeting. I think it's going to be the same thing.
AH (00:41:59):
When we think about quote, unquote, mitigating bias, it also means that we are mitigating bias for a larger group of individuals that we didn't even think about. So it just makes it good. It makes it good practice.
AH (00:42:13):
I always think about Siri and Alexa. I don't use voice recognition just because I like control, but voice recognition was not just to help us interact with our devices, right? That was an accessibility feature that was created and started.
AH (00:42:32):
And then people were like, "Oh, this is actually kind of useful. Maybe we should expand and put a little bit more money, maybe people will use this." And now it's kind of a standard.
EW (00:42:44):
For a lot of people, the biases work for them. For a lot of people, the biases are biased in their favor. How do we convince people that's not okay? It's not okay to hoard that, to defend it.
AH (00:43:01):
Because the bias will come after every single person at some point, right? And so it might be that today you're on the side of being on the advantage. But the fact is, is we all have something that is different than the majority.
AH (00:43:22):
Something, whatever it is, it could be that you like fries and mayonnaise, right? Maybe it's something minor like that. And then all of a sudden it's like, there's no more mayonnaise in the world because no one likes it with fries.
AH (00:43:35):
And so the fact is, is we all have one thing, at least one, some of us multiple things, that is different than the majority, because that's actually what makes us unique, and human, and different.
AH (00:43:46):
And what that means then is that at some point you are going to be the target of the bias. Guaranteed, a hundred percent, no doubt. And therefore, wouldn't it be nice if you kind of fixed it now?
EW (00:44:00):
Age bias is one of those things.
AH (00:44:03):
[Inaudible]. That's right.
EW (00:44:03):
If we're lucky we experience it, but when we experience it, it's not pleasant.
AH (00:44:09):
There's one. [Affirmative].
EW (00:44:10):
Does this come out of your robotics research? And if so, how?
AH (00:44:17):
Yeah, so, because I interact, and with my robotics, I do human-robot interaction, I interact with a lot of people, a lot of differences of people. And early on, I did notice that the types of learning that my robots were doing tended to shift toward different populations.
AH (00:44:38):
So I worked with kids. When I took my algorithms and interacted with older adults, they didn't work as well. I interacted a lot with children with autism, and it just so happens, boys had a larger incidence rate. And so I started noticing that when my robots interacted with girls, there was a slight difference, right?
AH (00:44:59):
And so it just came from me experimenting, and working with the robots, and working with people. And I realized that this was a bigger problem than just my own lab research.
AH (00:45:11):
That as these systems were being deployed to billions of people outside in the world, that these biases that I was recognizing were really going to perhaps derail our entire society if we didn't address it.
EW (00:45:28):
Your research has also been largely about researching trust, how people trust robots. Do you think people will trust unbiased robots more? Or do you think humans aren't that smart?
AH (00:45:51):
People will trust robots whether they're biased or not. They will trust the intelligent nature of these devices, and the decision-making processes, and the fact that these AI systems basically mitigate our need to have to work. It's actually an energy conservation function. For us.
CW (00:46:16):
Yeah.
EW (00:46:17):
I think you need to unpack that for me.
AH (00:46:20):
Yeah, so what happens is, is when we are in a system, when...working with an automated system, robot, or AI system, what happens is, is that we can go into reactive mode. Being reactive, like, "Yeah, sure. Yes, yes, yes, yes," is actually easier for us in terms of the energy we expend in terms of thinking than us having to process.
AH (00:46:49):
And so if we are in a scenario where we're interacting with a machine, and it seems to get it right most of the time, we actually go into this energy conservation mode, i.e., "I'm not going to necessarily think about this task, because the AI knows what it's doing, and therefore I can exert my energy on other things," like breathing, right?
AH (00:47:11):
And so what happens is, is that we go into that mode very quickly when it comes to AI, when it comes to robots. Very, very quickly, because we have this perception that they know what they're doing, and it's actually very hard to break that as well.
EW (00:47:31):
I mean, this is the AI cars, and people seeing they work well on freeways, and then stop, and then start playing with their phones, and forget that they are in charge.
AH (00:47:44):
Right. And watching movies and reading books, and yes.
EW (00:47:47):
All these things they tell us not to do, but the car works almost all the time. Why should I pay attention?
AH (00:47:55):
Right. I'm logging my hours, right? It's good. It's really good.
EW (00:48:01):
That's not something we're going to fix. Humans are always going to trust things that work most of the time, but we shouldn't....I don't know what the fix is for that.
EW (00:48:14):
I know I just said it's not something we can fix, but I feel like it is something we need to, do we need them to give us their confidence number? I mean, I guess that would be a probability and I would like it, but I suspect most people wouldn't.
AH (00:48:28):
Yeah. So, I've been kind of thinking about this in my own research, and some of the solutions we're exploring have some ethical consequences, right? Should an AI have a denial of service?
AH (00:48:42):
Because again, remember we can model people, we can model their behaviors, which also means we can model when they're in an overtrust mode. Should, for example, the car, next time you put it in autonomous mode, it goes like, "Nope, sorry, last time you totally overtrusted me. So this time no autonomy for you," right?
CW (00:49:01):
Well, that's literally what Tesla's doing manually right now. I read about, they've got this full self-driving beta going that people are using, and they're monitoring people. And if they detect that you've been nodding off or not looking at the camera enough times, they delete you from the program.
AH (00:49:16):
Denial of service. Right.
CW (00:49:18):
Yeah. So I could totally see that being a thing, but that's going to make people very angry, right?
AH (00:49:23):
Right, right. So there's this thing is...how do you balance that? How do you do it so that...now you're losing the human autonomy, the humans feeling that they are in control. But then you wake them up, right?
AH (00:49:45):
You're like, "Oh man, yeah, you're right. I did just kind of nod off and didn't pay attention, so this time I'm going to do right." And then of course, a week later, denial of service again, because as you said, we are human.
CW (00:50:02):
How much are we trending toward kind of the Asimov world here?
EW (00:50:07):
The Three Laws or something worse?
CW (00:50:09):
Well, I mean, he wrote a lot of things that kind of touched on morality and robots and stuff, but I do feel like we're starting to kind of engage with that stuff in reality.
AH (00:50:17):
Yeah. We're getting closer. Because the systems are, I won't say becoming, they are integrated now quite well into our lives, and that's just going to continue and accelerate.
EW (00:50:36):
Well, when I went to school, when I went to college, there was no ethics course. I even went to a college that is very humanities-heavy for a tech school, but I think there are more ethics courses happening now...Do most undergrad CS or engineers now have an ethics course?
AH (00:50:58):
Most places do. I would say a majority of computer science, majority of engineering programs, have some form of an ethics requirement. Sometimes it's...a full course, sometimes it's a thread integrated through courses. There's different ways of how it's done.
AH (00:51:22):
So yeah,...I mean, at the institution, university level, there are movements about how to do this, so that it's much more integrated into the curriculum. So that it's not just a course, right? Where it becomes part of the student's DNA. But...the conversations are happening.
AH (00:51:44):
We're going to start seeing movement in the education space over the next three to five years. So that the next generation of computer scientists, engineers, I think will have the tools to think about these problems and have the tools to identify that they don't know everything, right?
AH (00:52:03):
And so they need to build and have teams that represent the community in all these aspects.
EW (00:52:12):
You're teaching college students, you're teaching 20-somethings that they don't know everything? How is that possible?
CW (00:52:17):
Some of them are under 20.
AH (00:52:17):
Yeah, yeah.
EW (00:52:21):
But more seriously, what does an ethics course consist of? I mean, teaching them they don't know everything, and that they need to be part of a team. Is it about, "Don't do things that are going to make the world worse. It's okay to say no?" What else is in an ethics course?
AH (00:52:40):
Yeah. So, at Georgia Tech, I taught, and it's actually still being taught, but I constructed and taught an ethical AI course. And so the way that I approached it is, we would go over, I would use as an example word embeddings, which is a methodology in natural language processing, which has some biases...
AH (00:53:00):
Men are to doctors as blank is to nurses, right? All women are nurses, and all men are doctors kind of stuff. And so we go through this, and what they have to do is, is that one, they have to assess some of the algorithms that are out there and come up with these, "Firefighter, what's a firefighter, man or woman?"
AH (00:53:23):
Well, most word embeddings will say it's a man, right? Police officer. So they go through this entire exercise. So that's the awareness. And then what they have to do is, is they actually have to develop solutions to remove some of these biases so that there's not this type of association.
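[Editor's note: a minimal sketch of the word-embedding analogy probe described above, for readers who want to try it themselves. It assumes the open-source gensim library and its downloadable GloVe vectors; the model name and the specific word pairs are illustrative choices, not tools named in the episode.]

    import gensim.downloader as api

    # Load small pretrained GloVe word vectors (downloads on first use).
    vectors = api.load("glove-wiki-gigaword-100")

    # "Man is to doctor as woman is to ___?" -- the analogy probe described above.
    for word, score in vectors.most_similar(positive=["doctor", "woman"],
                                            negative=["man"], topn=3):
        print(word, round(score, 3))

    # Occupation words tend to sit closer to one gendered pronoun than the other.
    for job in ["firefighter", "nurse", "police", "engineer"]:
        gap = vectors.similarity(job, "he") - vectors.similarity(job, "she")
        print(f"{job}: similarity to 'he' minus 'she' = {gap:+.3f}")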
AH (00:53:43):
...What they have to do is,...and this is just the word embeddings, and then, they have to find a data set that's out there, and do a full kind of analysis that's separate from what I teach them, to actually basically do an audit, as an example.
EW (00:54:03):
Teaching the auditing skills,...that's pretty cool.
AH (00:54:07):
Yeah. Well,...there's different tools that are out there that I pull from. There's actually tools out there that, you can look at disparate impacts, you can look at different outcome measures, based on data sets, based on your algorithm.
AH (00:54:23):
...Now, they're techie tools, so they're not like anybody can just be like, "Oh yeah, sure. I'm going to compile this Python script and run it." But they do exist.
EW (00:54:35):
You spoke about these students having a project in the ethics class where they could monitor the data they found for biases. And then we were also talking about making some sort of agency or auditing system to look for biases. Are these connected? Are there tools coming?
AH (00:54:59):
So there are some tools that are out there...There's a tool called Model Sheets [Model Cards], for example, that Google has, that people can use. There's tools called ABOUT ML, that comes out of the Partnership on AI. IBM has a set of tools that you can use to do basically auditing assessment of bias around different measures and metrics.
AH (00:55:26):
The problem is, is that they are designed for technologists. So there is not a simple, "Oh, I can just pop in and double-click an app, and voila, everything's like magic."
AH (00:55:38):
It actually requires a little bit of intellect to figure out how to use them, but they're there. And I teach my students how to use them for a bunch of different types of applications. AI applications.
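[Editor's note: a hedged sketch of what these developer-facing audit tools look like in practice, using IBM's open-source AIF360 package (one of the IBM tools alluded to above). The dataset, protected attribute, and group encodings below are illustrative assumptions, not examples given in the episode.]

    from aif360.datasets import AdultDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    # Census-income data with 'sex' treated as the protected attribute.
    # (AIF360 expects the raw Adult data files to be downloaded locally.)
    data = AdultDataset(protected_attribute_names=["sex"],
                        privileged_classes=[["Male"]],
                        features_to_drop=["fnlwgt"])

    metric = BinaryLabelDatasetMetric(data,
                                      privileged_groups=[{"sex": 1}],
                                      unprivileged_groups=[{"sex": 0}])

    # Disparate impact: ratio of favorable-outcome rates between the groups.
    # 1.0 means parity; values well below 1.0 are a common red flag.
    print("Disparate impact:", metric.disparate_impact())
    print("Statistical parity difference:", metric.statistical_parity_difference())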
EW (00:55:51):
That's kind of reassuring. Okay. So I want to change subject. You left Georgia Tech after 16 years...You're going to Ohio State? Why?
AH (00:56:03):
Yes. Well, one, I'm bittersweet. I really do love Georgia Tech. It was my home. I still think of it as my academic home. But going to the Ohio State University as Dean of Engineering was an opportunity I could not refuse. One is that the institution is really leaning into kind of three things that I'm passionate about.
AH (00:56:29):
One is ensuring that the student and faculty population look like the city of Columbus, look like the state of Ohio, look like the U.S., and providing resources to do that. Because in engineering, we make things for everyone, and therefore engineers should look like the community that they're making things for.
AH (00:56:46):
Which is, again, something I'm very passionate about. The other is around innovation. Basically looking at emerging technologies, and how do you design for the future in a responsible way? So thinking about responsible engineering around medicine, around 6G, around artificial intelligence.
AH (00:57:04):
Which again, I'm like, "Oh, wait, hold on. I worked in all those areas." I'm super, super psyched about that. And then the third, which is about making education and engineering accessible...
AH (00:57:15):
In fact, President Kristina [Johnson] had announced she would like all students to graduate from college, as undergrads, debt-free, right? That makes engineering accessible to basically anyone who wants to be an engineer. Like, "Come here."
AH (00:57:36):
And as an engineering college, though, we have to ensure that we have the scaffolds to ensure that every student that wants to do engineering can be successful, because their high school may not have had computer science or may not have had calculus.
AH (00:57:49):
They may not have had these things that we require to kind of start off. And so what can we do as a university to make sure every single engineering student is successful? And so I'm excited about thinking about that. And all these three things means that it can be a national gold standard that other universities could look at to do it the right way.
EW (00:58:10):
But what about the robots? Are you going to miss them?
AH (00:58:14):
Oh, I still have a research lab. I'm always a roboticist.
EW (00:58:20):
Oh, good.
AH (00:58:20):
Right? Roboticist never changes. All of my jobs, I've always said, Dr. Ayanna Howard, roboticist, and then there's the title. So I will always be a roboticist. I will always build, design, and program.
EW (00:58:33):
Are there new sensors or systems that make you excited about the future of robotics? What should we be looking for?
AH (00:58:40):
I still kind of enjoy humanoid robots just because they're really fun to work with...There hasn't been a real difference...There's Pepper,...I have it at Georgia Tech. I'll buy one at Ohio State. That's kind of the funnest one I still have.
AH (00:59:00):
The difference is, is that the algorithms are much more powerful, right? We can design much more powerful algorithms that can take advantage of the hardware to do more interaction with people, language recognition in terms of the image space and behaviors.
EW (00:59:17):
I had a couple of listener questions I want to get to. The first is one I suspect you get a lot. "What's a piece of advice you'd give to aspiring robotics engineers who are just starting out in the field?"
AH (00:59:31):
...My one big piece of advice is be curious and explore. So one of the things about robotics and engineering, computer science, it really is this exploration process to find solutions to problems that people haven't yet solved. I mean, that's the ultimate goal.
AH (00:59:50):
And in order to do that, you have to be okay with being curious, and...things aren't going to always work exactly the way you thought, but that is part of the exploration process. So really lean into that and be okay with it.
EW (01:00:05):
You have a distinguished career. Paul K. wanted to know if there was a time you came across a fork in the road, and can look back, and think that was important, and why choose one fork over the other.
AH (01:00:21):
I did. So I would say about five years ago, I had the opportunity to go into corporate, because, AI robotics, even now, it's really, really, really popular. And a lot of companies were starting to poach academics into their fields, and becoming CTO and things.
AH (01:00:42):
So pretty nice, lucrative kind of positions. And I really had to think about what it was that I wanted to do. And I felt that my ability to basically have the freedom with respect to my research, the freedom to basically talk about change, was more important at this stage than the other opportunity, just in general, and I chose right.
AH (01:01:10):
Because I do have a national stage. I can talk about bias. I can talk about fairness. I can talk about AI, and ethics, and robotics. And I can point fingers even to companies, and even my own institution.
AH (01:01:24):
And that's my job. That is my function. That is expected of me. And I wouldn't give that up for the world. I didn't know that back then, but now I realize it was the right choice.
EW (01:01:35):
It's a tough choice, I bet. I mean, lucrative versus academic.
AH (01:01:41):
Oh, very, times 10 salary-level lucrative. But I also realize that what brings me joy is really that impact in terms of society and having that direct impact with people that I know I have now.
EW (01:02:01):
It's about making a difference.
AH (01:02:04):
Exactly.
EW (01:02:07):
You are an inspiration to many. Your career path was actually pretty linear, given, I don't know, you're black, you're a woman, you're in technology, and sometimes it doesn't go linearly. Was there a time when you thought, "No, I don't want to do technology. I'm done."
AH (01:02:34):
There was never a time I didn't want to do robotics, but there was definitely times when I didn't know...the path that I was supposed to follow. For example, I have a PhD. I, a couple of times, was like, "Why am I doing this PhD thing," right? "I can do robotics without a PhD, so why should I continue this path?"
AH (01:02:55):
So I had a bunch of those kinds of moments, but robotics, I always wanted to do. And I didn't know what that was, right? I think I would have been perfectly happy if I was in the garage building robots, and going to the grocery store, and working there, right? And I would be like, "I'm a roboticist," right? "Because I'm building robots."
AH (01:03:13):
I think that was the one thing I never could see myself giving up. It was the jobs, that sometimes it's like, "Maybe I shouldn't be here. Maybe I should be someplace else." And mostly it was because of culture, environment, microaggressions, the feeling of not belonging many times from others, some intentional, some unintentional.
EW (01:03:34):
What do you want people to take away from your book?
AH (01:03:39):
So the big thing I want people to take away from the "Sex, Race, and Robots" book is that we all have a responsibility in this world of artificial intelligence, whether you are a developer, a technologist, a consumer, or just a person that lives in a house somewhere in the middle of the Sahara or the forest.
AH (01:04:03):
We all have a responsibility, because if we don't, then the decisions are going to be left up to a very small number of individuals that might not necessarily reflect your individual interests, and the things that you want for your home, your family, your community.
EW (01:04:21):
Do you think it was important to put so much of yourself, of your past, your memories, and memoirish-ness into a book that is about technology?
AH (01:04:35):
I did. And I debated about that, but when I looked at the books that were out there that talked about AI, and robotics, and bias, a lot of them were removed. And so I felt that by weaving in my own personal story, when people can relate to my story themselves, then they can also relate to the AI and the issues, right?
AH (01:04:58):
So it's more of a, I'm putting you in my place, because all the stories, there's at least one that people are like, "Oh yeah, I remember going through that." And then as soon as I grabbed that kind of sentiment, and feelings, then it's like, "Oh, then this other stuff that surrounds it, it's got to be important, because now I can relate."
EW (01:05:18):
It is incredibly effective. I like that you did that. When we talk about ethics, and bias, and AI, and race relations, and all of that, it can be dry, and hard, and difficult, even if it's good. But the personable-ness of your book, I really appreciate it. So thank you.
AH (01:05:39):
Thank you. Thank you.
EW (01:05:40):
Do you have any thoughts you'd like to leave us with?
AH (01:05:44):
Of course. Read the book. Listen to the book. But no, I think,...you had asked the question, are we closer to the singularity than we were five years ago. I think we are, but of course, that could be a million minus five years.
AH (01:06:02):
But what I want people to really think about is they don't need to be fearful or afraid, or think AI is this all-knowing thing that we have no control over. We have direct control.
AH (01:06:17):
AI is learning from us. It's learning from our behaviors and our biases. And if we can control ourselves and our biases, then we can definitely control AI.
EW (01:06:27):
Our guest has been Dr. Ayanna Howard, Dean of Engineering at Ohio State University, and author of "Sex, Race, and Robots" on Audible. There will be links in the show notes for that book, as well as a link to the transcript for her previous episode, and a link to that show.
CW (01:06:47):
I just wanted to add that there's a lot of pessimism out there, and it's great to hear your viewpoint and your optimism come through. And I think it's helpful for a lot of people to think about these issues in a more optimistic frame. So thank you.
AH (01:07:00):
Thank you.
EW (01:07:02):
Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for questions: Chris L., Sahil, and Paul K. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.
EW (01:07:19):
And now a quote to leave you with. From Ayanna Howard's book, "Sex, Race, and Robots: How to Be Human in the Age of AI." "We've all become anomalies in the world of AI, but we have the power to triumph. If we open our minds and embrace the differences that make us human, we have a chance of preserving our humanity in the age of AI."