187: Self-Driving Arm

Transcript from 187: Self-Driving Arm with Patrick Pilarski, Elecia White, and Christopher White.
EW (00:00:06):

Hello and welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Patrick Pilarski. I think we're going to be talking about machine learning, robotics, and medicine? That's gotta be cool.

CW (00:00:25):

It's your show. You should know.

EW (00:00:26):

I should know.

CW (00:00:28):

[Laughter]. Hi Patrick. Thanks for joining us today.

PP (00:00:30):

Hey, thanks so much. It's great to be on the show.

EW (00:00:32):

Could you tell us about yourself as though you were introducing yourself for a panel?

PP (00:00:38):

Sure. I never like introducing myself on panels, but in this case [laughter], I'm a Canada Research Chair at the University of Alberta here in Edmonton, Canada. And I'm specifically a Research Chair in Machine Intelligence for Rehabilitation and sort of putting together two things you don't usually see in the same place. A lot of what I do is working on connecting machines to humans. So bionic body parts, artificial limbs, and other situations where people and machines need to interact. So we look at making humans and machines better able to work as a team.

CW (00:01:11):

So you're working on Darth Vader.

PP (00:01:12):

That's right.

EW (00:01:13):

No, he's working on The Six Million Dollar Man.

PP (00:01:15):

Yeah. That's the more positive spin on things. [Laughter]. Yes. We're definitely working on...well, hopefully it's not 6 million. The health system wouldn't be too thrilled with a Six Million Dollar Man but -

CW (00:01:23):

[Laughter]. Now it would be six billion anyway.

PP (00:01:23):

A little lighter price tag might be good, yeah.

EW (00:01:30):

This is the point of the show where normally we ask you a bunch of random questions, but the lightning round is on vacation.

PP (00:01:37):

Oh no.

EW (00:01:37):

So we only have one question to get to know you. And that is, who would you most like to have dinner with, living or deceased, and why?

PP (00:01:50):

Well, I think the most immediate person I'll have dinner with is my dear wife later on this evening. And that's actually the one I like having dinner with the most...So the tricky thing is I don't like most. I'm not a guy that does most or best, but I will answer in a different way, is that I've been currently reading a very cool book from an author named Suzuki Bokushi. And Suzuki Bokushi was actually alive, you know, into the 1700s.

PP (00:02:14):

So long dead, I guess, in this answer. But he lived up in the really snowy bits of Japan. And so this is a very cool book, all these little snapshots of one of the snowiest places that Japan could imagine. And it's even snowier than here in Edmonton. So I'd love to sit down with him over a cup of tea or some kind of nice evening meal and chat about how they deal with all their snow, because...we've got some here, but wow. They get really socked up in certain parts of Japan. So...I think that would probably be the one I'd pick for now, just recently.

EW (00:02:44):

Alright. That is pretty cool. And so now I want to ask you another strange question. Do we have the technology?

CW (00:02:55):

[Laughter].

PP (00:02:55):

Do we have the technology?

EW (00:02:57):

Ah, okay. Sorry.

PP (00:03:01):

How so? How so?

EW (00:03:01):

It's a Six Million Dollar Man quote.

PP (00:03:04):

Ah, sorry. [Laughter]. I'm off my game today. I'm so sorry. Yes. Yes and no. Yes and no.

CW (00:03:11):

We can rebuild him.

PP (00:03:13):

We can rebuild him. [Laughter].

EW (00:03:14):

Bionics: prosthetics that are smart. This is crazy talk. I mean, that's just, that's very cool on one hand. On the other hand, buh bwuh? What do you do? I mean, what does this mean?

PP (00:03:31):

So "bwuh" is a really good reaction and it's actually probably the most common reaction, I think, when we start to say, "Hey, yeah, you know, you might have an artificial limb and it might be learning stuff about you, but it's actually not that crazy." So, I mean, if you bear with me a little bit, when someone's lost a part of their body due to an injury or an illness, they need to have sometimes assistive technologies.

PP (00:03:53):

They need technologies that are able to replace or sort of put back some of the function that was lost. And now...the really tricky bit here is that the more things you lose, in many cases, the more things you have to put back. The problem though, is that the more things that are lost in the case of an amputation or something, if you're losing an arm, you need to restore the function of the arm, but you have less places really to record from the human body.

PP (00:04:18):

You have less sort of windows into what the person really wants. And so a very natural way to start thinking about how you might want to start putting back that function, and understanding what the person wants, isn't even sometimes to be able to try and pry out more signals from the human body, but it's "No, why don't we just make the technology itself just a little bit smarter."

PP (00:04:36):

And then it can know things like, "Hey, you know, it's Thursday and I'm making soup." Okay, cool. I'll be able to fill in the gaps. I'll be able to sort of guess at what the person might want or what the person might do. And it makes it a little bit more seamless and a little bit more natural. So you can do more with less. This is sort of...without the thermodynamics police coming in and locking us all up.

PP (00:04:55):

I mean, we're really trying to get something for nothing. And machine intelligence helps us get a lot for a little, so a smart prosthetic hand helps us do more with less. I think that's the key thing. And so it sort of takes the "bwuh" to, "Oh yeah. Maybe that makes sense."

CW (00:05:09):

So it's more like a self-driving arm?

PP (00:05:12):

More like a self-driving arm. Exactly. And...this is actually very much, that's a good analogy because you can, a lot of the systems we think about that do stuff for us, you give really high level commands. You do sort of the big picture thinking and the technology fills in the gaps. We see this with everything from our smartphones to our computers to maybe someday soon, I hope someday soon, the vehicles we have even up here in Edmonton.

PP (00:05:36):

And the nice thing about this is that yeah, you could say, you know, what the self-driving arm is, you're giving the high level commands, but I mean, we just can't, in some cases, for the bionic body parts case, we can't even measure the right signals from the body. We can't sometimes get the information out for say, fine finger control to play piano or catch a ball. But the system could say, "Hey, in this situation, and you're making these big kinds of motions, I bet you want your fingers to coordinate in this kind of way." So you may be able to play that good lick on the piano.

PP (00:06:05):

So yeah, it's kind of like a self-driving arm, but...without the sort of scary, the bit that people always get scared about is sort of the Dr. Octopus side of things [laughter] where it's, "Oh, my arms are controlling me or they're doing things that I don't want them to do." I think if we've done everything right, it's like a really good human team, right? A good sports team or a good team in any other sense is that they work together so seamlessly that it doesn't seem like one is controlling the other, but everybody's working really efficiently towards achieving the same goal. I think that's where we're going with smarter, with smart parts and better bionic bits.

CW (00:06:36):

I think we've all seen those horror movies where, you know, "The arm made me do it!" But I don't want to -

PP (00:06:41):

Exactly, exactly. And one of my students is actually, one of my graduate students is actually working on, what she wants is, you know, a hand that's just a disembodied hand that might, you know, crawl across the room and go get stuff for you. We've got another set of students working on what we call prosthetic falconry. So you might, instead of having an arm attached to your body, have a quadcopter with a hand that flies across the room and picks stuff up and comes back for you with a prosthetic gyr essentially.

PP (00:07:03):

So we're doing some cool stuff like that. And then you could imagine. "Yeah. Okay. The thing is actually pretty autonomous in the fact that it could actually, you know, move around the room a little." But for the most part...the chance of the system actually controlling you back is very, very low. I think that the Doc Ock query is not something that we have to take too seriously. Although we do have a no Doc Ock rule in my lab. So you can put one extra body part on your body. You can put two on. The minute you put four extra limbs on your body, you're kicked right out of the lab. So we have a no Doc Ock rule.

CW (00:07:35):

[Laughter]. Alright. Before we get any deeper into that, and while Elecia is trying to get off the floor from laughing -

EW (00:07:42):

It's so cool!

CW (00:07:42):

I did want to kind of establish a baseline. I don't think I understand. And I don't think a lot of other listeners might understand what the current state of the art is. If I were to lose my forearm, heaven forbid, next week, what would, you know, if I had the best insurance in the world, what would I end up getting as a prosthetic? And what would that be capable of doing?

PP (00:08:05):

Yeah, this is a great place to start. So...first, I really hope that you actually don't lose any body parts. If you do, you know...drop me an email, we'll see what might be some good suggestions for you. You might get a prosthetic quadcopter [laughter], but -

EW (00:08:17):

He would really like that.

CW (00:08:21):

I'll be right back, gonna get an ax. [Laughter].

PP (00:08:23):

No, no, no. See, this is just flat out no. Although actually just as a side note, this might've come up later on in our conversation, but if you ever get a chance, there's a fantastic book called Machine Man written by Max Barry. It's about a guy who, growing up, wants to be a train. Doesn't want to be a train engineer. He actually wants to be a train. Anyway, he loses one of his legs in an accident. And pretty soon realizes that the leg he built is actually...better than the biological one he's left with. Anyway, it goes all downhill from there. It's a fantastic sort of a dark satirical work of fiction, but it's definitely worth reading. It's on the required reading list for my laboratory.

PP (00:08:58):

So we've got a copy on the shelf, but it fits right in with your question, so no going to get the ax. But to answer your actual question, the state of the art is partially dependent on what kind of amputation someone has. So usually what happens is when someone presents with an amputation at the clinic they'll be assessed and the vast majority of people will get something that isn't actually that robotic at all.

PP (00:09:23):

They'll get something that's what we call a body-powered prosthesis, but it's...essentially a series of cables and levers. So it's something that they control with their body. It's purely mechanical with no electrical parts. And for the most part, a lot of people really like those systems in that they're trustworthy. They respond really quickly. They can sort of feel through the system itself.

PP (00:09:44):

So if they tap the table with it, they can feel it sort of resonating up their arm. Recently there's been a big surge in newer, more robotic prostheses. We call them myoelectric prostheses. But really what this means is that they're recording electrical signals from the muscles of the body.

PP (00:10:01):

So if someone has an amputation, say just above the elbow, then you imagine they might have a socket. They might have something that's put over top of their residual limb or the stump. And they might have sensors that are embedded inside that socket. So...those sensors would be measuring the electrical signals that are generated when people contract their muscles. So when they flex the muscles in that stump and the remaining limb, the system can measure that and use that to control, say a robotic elbow, or maybe a robotic hand.

EW (00:10:29):

Are these flex sensors, or are these like the heart rate sensors that are lights and looking at the response from that?

PP (00:10:37):

So they're actually...multi-pole electrical sensors. So you're looking at actual voltage differences.

EW (00:10:43):

Oh, okay.

PP (00:10:44):

Yeah, so it just, it makes contact with the skin...Some of them are these little sort of silver domes that sort of just press lightly into the skin. Some of them have these little tiny strips of, I think, very expensive wire that just make good electrical contact with the skin.

PP (00:10:59):

But when your muscles contract, when all those motor units get recruited and start doing their thing, they actually generate changes in the electrical properties of the tissue. So you can really measure it in a very straightforward way. There are actually commercial products now; you can go down to your favorite consumer electronics store and get something. One of the products is called the Myo, made by Thalmic Labs -

EW (00:11:19):

Yeah. SparkFun's also.

PP (00:11:19):

- and it just like, yeah, exactly. And you can easily get one of those and jam it right in, and that's using the same kind of signals. Obviously the clinical systems have a bit more precision to them. And also they're a bit more expensive. [Laughter]. But yeah, so the idea is you measure some of these signals and they can be used to say whether a robotic arm should go up or down, or whether a robotic hand should open or close.

PP (00:11:42):

So in terms of top-of-the-line systems where you have a robotic, let's say a robotic hand, and a robotic elbow for someone, the hand itself might be able to move individual fingers. But the caveat there is that the fingers can typically only move to open or close. What that means is the person would say, pick a grip pattern. Like I want to make a fist, or I want to grab a key.

PP (00:12:03):

And then the hand would just open and close. So they don't really have full control over the individual fingers, the individual actuators. Likewise, the wrist is typically fixed or rigid and people won't be rotating their wrist or flexing their wrist. This is starting to change, but in terms of what we see out there in the clinic, what people are actually fitted with, it's very uncommon to see anything more than, say, a robotic elbow with a robotic hand attached to it that opens and closes.

PP (00:12:29):

So that's the sort of clinical state of the art. The fancy-dancy, what-might-actually-be-happening-soon kind of thing is a robotic arm where there's individual finger control. The fingers can sort of adduct and abduct, so they can move side to side or open and spread your hand; multi-degree-of-freedom wrists, wrists that move: they flex, they bend sideways, and they also rotate; and also full shoulder actuators.

PP (00:12:55):

So, I mean, if you think about what will be coming down the pipe in another 5 to 10 years, a lot of our colleagues out East and some of those down in the States have done some really, really cool jobs of building very lightweight, very flexible, and highly articulated bionic arms. And those will, I hope be commercialized sometime soon. So we're seeing a big push towards arms that can do a lot.

EW (00:13:18):

But you have to control those. If you want to be able to articulate the fingers and you have an amputation above an elbow, you have to learn how to fire the right muscles to control, to generate that voltage we're reading and send it down to the fingers. It's a hard mental problem, and a lot of work for somebody to be able to use these, isn't it?

PP (00:13:46):

Well, that's if we have a million dollar, if we have The Six Million Dollar Man, that's the $6 million question, is how do we actually control all those bits? And so I really think this is the sort of the critical issue that we're solving, not just with prosthetics, but also with a lot of our human machine interaction technology...I mean, we have sensors, we have really smart folks making really spectacular sensors of all different kinds. We're getting sensors getting cheaper...The density of sensors we can put into any kind of devices is just, it's skyrocketing. Likewise, we have fancy arms. We have really advanced robotic systems that can do lots of things.

PP (00:14:21):

They can do all the things a biological limb can do to a first approximation and maybe someday even more. But the point you bring up is a really good one, that gluing those two things together, is in my mind, the big remaining gap. So how do we actually, even if we could record a lot from the human body, and even if we have all those actuators, even we have all those robotic pieces that move in the ways we hope they would, how do we connect those two? How do we connect the dots? The sensors -

EW (00:14:50):

How do you read people's minds?

PP (00:14:52):

Yeah, that really is, I think the big question, 'cause reading from all the things we could sample from their body is...I really think of it like looking at body language, it's the same kind of idea...we're really good at it as meat computers, we're great at looking at another body and sort of trying to infer the intent of that particular person.

PP (00:15:14):

We're asking our machines to really do the same thing. We're asking them to look at all of the different facets of body language that are being presented by the wearer of a robotic arm. And then the robotic arm has to figure out what that person actually wants. A lot of the time, our engineering breaks down at that scale. So our ability to say, map any combination of sensors directly to any combination of actuators, if I'm recording...if I put a sensor on someone's biceps and on their triceps, so, you know, the bits that make the elbow flex and extend...I mean, all of us could sit down and hack out a quick script or build, hardwire a system that would take the signals from the bicep and the tricep, just sort of maybe subtract them. And now you've got a great control signal for the elbow to make the elbow go up and down.

PP (00:16:02):

In the clinic, this is typically how the elbow control works. But if we start to think about having 10 sensors, hundreds of sensors, if we start reading directly from the nerves of the arm, so the peripheral nervous system, or even recording directly from tens, hundreds of thousands of neurons in the brain, suddenly it's not so clear how you'd go about hand-engineering a sort of fancy control algorithm that takes all those signals and turns them into some kind of control signal for the robot arm. That's the really hard thing. I mean, that's really where the machine learning starts to fit in, where we can start to learn the patterns as opposed to engineer those patterns.
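
To make the two-sensor case concrete, here is a minimal sketch of the kind of hand-wired mapping described above; the windowing, gain, and deadband values are purely illustrative and not taken from any clinical controller.

```python
import numpy as np

def emg_envelope(raw_window):
    """Rectify and average a short window of raw EMG samples."""
    return np.mean(np.abs(raw_window))

def elbow_velocity_command(biceps_window, triceps_window, gain=1.0, deadband=0.05):
    """Map biceps/triceps activity to a signed elbow velocity.

    Positive output flexes the elbow, negative extends it. A small deadband
    keeps the joint still when both muscles are close to equally active.
    """
    difference = emg_envelope(biceps_window) - emg_envelope(triceps_window)
    if abs(difference) < deadband:
        return 0.0
    return gain * difference

# Made-up samples standing in for recorded EMG: biceps active, triceps quiet.
biceps = np.random.normal(0.0, 0.4, size=200)
triceps = np.random.normal(0.0, 0.1, size=200)
print(elbow_velocity_command(biceps, triceps))   # positive -> flex the elbow
```

This kind of hand mapping works for two sensors and one joint; scaling it to tens or hundreds of signals is where, as described next, hand-engineering breaks down and learning takes over.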

EW (00:16:41):

Okay. And that's how we get to machine learning, which is machine intelligence - actually, do you prefer machine learning, or machine intelligence, or artificial intelligence, or neural nets, or, what are the right words?

PP (00:16:54):

Yeah, the right words are something that I think, "Oh, we're always trying to figure out what the right words are." The most important thing is to sort of pin down what it is that you're actually talking about. I think just starting out from the top is that artificial intelligence is often the wrong word.

EW (00:17:08):

Yes. [Laughter].

PP (00:17:08):

And it's a word, it's a phrase that comes with so much baggage. I think we see it so much in the media and the popular culture. It gets thrown around a lot. I gave a lecture just last week talking about really, I mean, we have people applying AI to what amounts to an advanced toaster and calling that artificial intelligence.

EW (00:17:27):

Yes. Rrr.

PP (00:17:27):

And then arguing about toaster rights, they're saying, "Oh my goodness, this toaster is an existential threat to my real, my ongoing existence."

PP (00:17:36):

And sometimes people are really applying terms like artificial intelligence to just a clever control system and something like a toaster or a robot vacuum cleaner. And then there's people that are thinking really about machines that might some kind of very strong or detailed kind of general intelligence. And we conflate those two together.

PP (00:17:54):

So I think AI, because of all of its baggage, is actually...something that just doesn't really hit the point. The other tricky thing about just talking about intelligence, artificial or meat intelligence or hardware intelligence, is that when we talk about intelligence, people often think it's sort of, it is intelligent or it isn't intelligent. I think by casting a term like AI onto the entire endeavor, it really tries to make it very binary, when really we get a gradation.

PP (00:18:22):

I mean, your thermostat is in some level, fairly intelligent. It figures out where it needs to go to keep the temperature in your house right on point. A self-driving car is a different kind of intelligence. A Sony Aibo, one of the little robot dogs. Yeah, you could say that there's intelligence there. And likewise, when we start looking at programs like AlphaGo, the Google DeepMind program that recently took out Lee Sedol in a human-machine match in the game of Go, I mean...you could argue that there's intelligence there.

PP (00:18:50):

Now I'm just gonna keep breaking this down a little bit if that's okay.

EW (00:18:52):

Yeah.

PP (00:18:52):

The intelligence piece is also a bit...soft in terms of how we throw in things like learning. So you asked me about machine learning or machine intelligence. I can imagine, I think a lot of us could imagine, that there might be a system that we would call very intelligent, a system that has lots and lots of facts. Think of a...Watson jeopardy-playing robot-style thing that knows lots and lots and lots of facts. Those facts, let's pretend that those facts have been hand-engineered. They've been put in by human experts. So the system might not have learned at all, but it might exhibit behaviors that we consider very, very intelligent.

PP (00:19:29):

At the same time, we might have systems that maybe we don't think are that intelligent, but that are very evidently learning. I think of some of the more adaptive, machine-learning thermostats or something like that, that are actually learning. But I mean, it wouldn't be able to tell you...where Siberia is, or who is the leading public figure in Japan...That's something that is facts versus learning. So intelligence, I think, involves learning. It involves knowing things. It involves predicting the future or being able to acquire and maintain knowledge. And it actually revolves around using that knowledge to do something, to maybe pursue goals or to try to achieve outcomes.

PP (00:20:10):

So I break down intelligence, maybe into machine intelligence, let's be specific about machine intelligence, breaking down machine intelligence into representation, how a machine actually perceives the world. And then prediction, which is really in my mind building up facts or knowledge about the world, and then control, which is in a very engineering sense, being able to take all of that structured information, all of those facts, and then use that to change a system's behavior to achieve a goal.

PP (00:20:39):

So I think that's a nice, clear way of thinking about intelligence and specifically machine intelligence. So when I talk about these kinds of technologies that we work on in the lab, or when I'm talking more generally about what most people say is artificial intelligence, I really do, I prefer machine intelligence because it's kind of clear. We can say, "Yeah, we're talking about machines and we're talking about intelligent machines." It doesn't, there's nothing artificial about it. [Laughter]. If it's intelligence, then it's intelligence.

EW (00:21:05):

Is deep learning a subset of machine intelligence or sort of the same level, but a different word for it?

PP (00:21:14):

So deep learning, I mean, there's a lot of excitement, I'm sure. I'm sure you've seen all of the large amounts of publicity that deep learning's received in recent months and years, and for good reason: it does some very, very cool things...In the same way, there are people who are looking at deep learning to do things that we would consider very, I guess, higher-level intelligence tasks, looking at things like manipulating language and understanding speech, which is already what we might consider to be a very intellectual pursuit.

PP (00:21:44):

And there's also deep learning, which is being used for some fairly specific applications, things that are maybe what we consider less general in terms of intelligence, but more like...a targeted or a specific function. So, I mean, one thing we've looked at is applying deep learning to some laser welding. So looking at how we could use it to see whether or not a laser weld might be good or bad. This is just one project I worked on with one of my collaborators.

PP (00:22:09):

And that, I mean, that's a very, it's not what I would consider a system that has very general intelligence when you compare that to something like a language translation system, like some of the things that Google has been working on with deep learning, to be able to generally translate between multiple languages. That we'd consider a higher level kind of intelligence. Still, not really a general intelligence. You wouldn't like stick that in your Roomba, and it goes around and suddenly bakes you toast and then writes a dissertation on...ancient Chinese poetry. That's another step up the ladder, I think.

CW (00:22:41):

Maybe a couple steps.

PP (00:22:42):

Maybe a couple steps. Yeah. Maybe one, maybe two. [Laughter]. But deep learning, yeah, it's a step in the right direction, it's a step in a direction that leads us towards more complex systems that might have more general capabilities.

EW (00:22:58):

So when I think of deep learning, it's about taking an enormous amount of data and throwing it at a few different algorithms that are pretty structured, and it leads to neural net-like things. And you can't always see inside of deep learning. If you want to know what's going on, if you want to build a heuristic instead, you don't go down the deep learning path. That's not going to...you're not going to go there. Is that right? Or am I...? It's been a long time since I've learned the difference between these things.

PP (00:23:30):

Yeah. So deep learning, deep neural nets especially, most of the time when we speak of deep learning, we're really talking about a deep neural network and people have been working, there's some very nice maps you can find on the internet showing the different kinds of deep nets and the different ways that they're structured. [Laughter]. Some of them are more interpretable than others.

PP (00:23:49):

In essence, you're very right. You're taking in a lot of data. And I think one way that may be the clearest way to start separating out the different kinds of machine learning and machine intelligence that we might want to play with, as engineers, as designers, as just interested people, is to think less about the usual way we label things. Deep learning is typically a case of what we call supervised learning. And there's unsupervised learning as well, which also leverages deep nets.

PP (00:24:17):

And then there's...the field that I work in called reinforcement learning. But maybe more clearly we could say that a lot of the cases of deep learning that people use deep learning for are actually cases of learning from labeled examples.

EW (00:24:31):

Yes.

PP (00:24:31):

So you give a ton of examples. And each of those examples has a usually human-generated label attached to it. So you're going through the internet. You're like, I want to find pictures of Grumpy Cat. And so you show a bunch of images and then the system says, "Yeah, Grumpy Cat." And you're like, "No, that wasn't Grumpy Cat" or "Oh, Grumpy Cat. Yeah, that was." The system adapts its internal structure. It changes its weights so that it better lines up the samples with the labels. So a lot of what we see in deep learning, the majority, I think, is a case of learning from labeled examples. Now -

EW (00:25:00):

So you already know what the truth is when you go in.

PP (00:25:04):

Absolutely. Now for training, this is also something that we see a lot...especially with deep nets, is that you usually have a phase of training, and many complex heuristics have been developed to try and figure out how to train them correctly.

EW (00:25:17):

Yeah.

PP (00:25:17):

And there's some really smart people working on that. I don't work on that because there's plenty of other smart people solving those problems. But the idea is that you find a way to train it, usually on a batch of data. And now...you have other examples during deployment. Let's say now you have a Grumpy Cat detector that you've sent off into the world and it has to do its job. And it now sees new examples of photographs. And it has to say yes or no, or say what that photograph actually is. Or what that string of speech is.

PP (00:25:44):

So the deployment systems will now be seeing new data that has not previously been presented. So this is a training and a testing paradigm. That's one of the important things as well about the usual way that we deal with learning from labeled examples. You build some kind of classifier or some kind of system that learns about the patterns in the information. And then you would deploy that system. Typically -

EW (00:26:05):

You make it sound so easy, but yes. [Laughter].

PP (00:26:09):

I make it sound so easy. It's actually not. Actually, I think as we were just comparing our notes really before the show...often one of the most difficult things is just installing all the right software packages. I think sometimes that's one of the most challenging bits, but the understanding of the concepts is actually, none of it's really that fancy or that tricky. When you think about it at the highest level, really it's like saying, "Hey, yeah, I have this machine, this machine has some internal structure. I show it a sample of something. I show it an example and I tell it what that thing should be." And it just sort of shifts itself around. It jiggles its internal structure in a really nice way, so that it's better able to say the thing I want it to say when it sees another example that's close to the one I showed it.

PP (00:26:46):

So that's what I mean by, what we usually mean by, supervised learning. It covers a lot of what we consider deep learning. And the only thing that makes it deeper is how complex that internal structure is...that thing that jiggles. So the internal structure that changes to better line up samples with labels, when we look at deep learning, as opposed to earlier work on multilayer perceptrons or one- or two-layer neural nets, we're just adding complexity to that internal system and the way that pieces interconnect with other pieces.

PP (00:27:16):

So we're just dialing up the complexity a bit. And because of that, the kinds of relationships, the kind of sample label pairs that can be learned, gets a lot more powerful. We get more capacity out of that. But in essence, it's very much the same thing as before, but more.
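
As a rough sketch of that "line up samples with labels" idea, here is a tiny supervised learner on synthetic data; a deep net replaces the single weight layer below with many interconnected layers, but the spirit of the training loop is the same. All of the numbers here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled examples: 2-D points, label 1 when x + y > 0, else 0.
X = rng.normal(size=(500, 2))
y = (X.sum(axis=1) > 0).astype(float)

w, b, lr = np.zeros(2), 0.0, 0.1

for epoch in range(200):                      # training phase on the labeled batch
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # current guesses
    w -= lr * X.T @ (p - y) / len(y)          # jiggle the internal structure...
    b -= lr * np.mean(p - y)                  # ...to better match the labels

# Deployment phase: classify an example the system has never seen before.
x_new = np.array([0.3, 0.4])
print(1.0 / (1.0 + np.exp(-(x_new @ w + b))) > 0.5)   # True: predicted label 1
```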

EW (00:27:31):

Yes.

PP (00:27:31):

Just the training bit, the actual method for going about updating that black box, that deep neural net, is one of the things that becomes even more complex now than it was in previous years.

EW (00:27:43):

But when we talk, when you talk about smart prosthetics, it's hard to get a million sample points for a human who just went through something pretty traumatic, like losing a limb.

PP (00:27:54):

Yeah.

EW (00:27:54):

And their samples aren't going to apply to somebody else's because our bodies are different.

PP (00:28:02):

Yeah.

EW (00:28:02):

So you don't do this type of deep learning, do you? You mentioned reinforcement learning.

PP (00:28:10):

Yeah, so that's actually great. So let's just jump into reinforcement learning because that is the area, that's my area of specialty, my area of study and the area where most of my students do research. So I talked about learning from labeled examples being the general case that we see in machine learning, and one of the areas of greatest excitement. There's also what we could consider learning from trial and error.

PP (00:28:32):

So when I say reinforcement learning, I actually do mean learning from trial and error. And the kind of learning I work on is a real-time learning approach. So instead of trying to have a training and a testing period where you show a large batch of previously recorded data, the systems we work with are essentially dropped in cold, so they could be attached to a prosthetic arm, they could be attached to a mobile robot.

PP (00:28:55):

And while that system is actually operating, while that system is interacting with the person or the world around it, it's learning, it's learning all the time, and it's changing itself all the time. So the data that's being acquired is actually readily available. And it's available from the actual use of the system.

PP (00:29:11):

So this is the case where we're learning from, instead of...I think of it instead of learning from a vat of data, we're learning from a river of data or a fire hose of data, the information that's currently flowing through the system and flowing by the system. So it's a different kind of learning, and it's...a nice thought that we can have systems that not only learn from stored data, but can also learn from real ongoing experience. So that's the area we work in.
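
A minimal sketch of that "river of data" style of learning: the learner below never stores a batch; it nudges its weights a little on every new sample as the stream flows past, then throws the sample away. The stream itself is faked here, and the update rule is a plain online least-mean-squares rule, just to show the shape of the computation.

```python
import numpy as np

rng = np.random.default_rng(1)

w = np.zeros(4)      # the learner's entire memory: one small weight vector
alpha = 0.05         # step size

def sensor_stream():
    """Stand-in for signals arriving in real time from a device."""
    while True:
        x = rng.normal(size=4)
        target = 2.0 * x[0] - 1.0 * x[2]   # an unknown relationship to pick up
        yield x, target

for step, (x, target) in enumerate(sensor_stream()):
    error = target - w @ x
    w += alpha * error * x                 # learn from this sample, then discard it
    if step >= 5000:
        break

print(w.round(2))   # approaches [2, 0, -1, 0] without ever storing the data
```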

CW (00:29:36):

So could you do something like, I know some of the self-driving car manufacturers have their software on, but it's not actually doing any self-driving, it's in shadow mode. Do you do any training where, "Okay, somebody lost one arm, but they have a good right arm," let's say. Could you do any training with the good arm and say, "Okay, this is how this works, and this is where these signals are, and this is how this person uses this" and then apply it to the prosthetic later?

PP (00:30:03):

Oh, that is actually exactly what we're doing right now. So, one of my students is, we're just finishing up a draft of a research paper to submit to an international conference. And this student's work on that paper, and actually that student's thesis is really about that very idea. Where you could imagine if you have someone who's lost one arm, but they have a healthy biological arm on the other side, you could just have the biological arm doing the task, again, cutting vegetables, or catching a ball, or doing some complex task.

PP (00:30:31):

And you could have the other, the robotic limb just watching that, essentially seeing what needs to happen and actually being trained by the healthy biological limb. And you could have this in...sort of a one-off kind of fashion where you show it a few things and it's able to do it, or you could have it actually watching the way that natural limbs move in an ongoing fashion and just getting better with time.

CW (00:30:51):

[Hmm.]

PP (00:30:51):

So that's a great insight, is that yeah, we could actually have a system learning. And actually the way the student is teaching the arm is that it actually gets rewarded or punished [laughter] depending on how close it is to the biological limb. So I talked about reinforcement learning, and if we get right down to it, learning through trial and error is essentially learning through reward and punishment.

PP (00:31:13):

So like you'd train a puppy, we're training bionic body parts, or any other kind of robot you'd like. When the robot does the right thing or when the system does the right thing, it actually gets reward. And its job is to maximize the amount of reward it gets over the long term. So that's the idea of reinforcement learning: the system not only wants to get reward right now, but it wants to acquire reward, positive feedback, over an extended future, over some kind of window into the near or far future.
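
A heavily simplified sketch of that training signal: reward the prosthetic controller for staying close to what the intact limb just did, and feed that reward into an ordinary reinforcement-learning update. The discretization, the action set, and the use of tabular Q-learning here are all illustrative stand-ins, not the method actually used in the lab.

```python
import numpy as np

rng = np.random.default_rng(2)

actions = [-0.1, 0.0, +0.1]            # nudge the robot joint down, hold, nudge it up
n_states = 11                          # buckets of tracking error
Q = np.zeros((n_states, len(actions)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def reward(prosthetic_angle, biological_angle):
    """More reward the closer the robot joint tracks the intact limb."""
    return -abs(prosthetic_angle - biological_angle)

def bucket(error):
    """Discretize the prosthetic-minus-biological angle error."""
    return int(np.clip((error + 2.0) / 4.0 * n_states, 0, n_states - 1))

pros_angle = 0.0
state = bucket(pros_angle - 0.0)
for step in range(50000):
    bio_angle = np.sin(step / 50.0)    # stand-in for the intact limb's motion
    a = rng.integers(len(actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    pros_angle = float(np.clip(pros_angle + actions[a], -1.0, 1.0))
    r = reward(pros_angle, bio_angle)
    next_state = bucket(pros_angle - bio_angle)
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state
```

After enough steps, actions that shrink the gap to the demonstrating limb accumulate higher values, which is the "rewarded for being close" idea in miniature.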

EW (00:31:42):

Okay. Digging a little bit more into this, because I'm just fascinated. We are mostly symmetric creatures and sure, chopping vegetables is something that you do with one hand, and you kind of have to do it with one hand because the other hand is used for holding the vegetables. But as I sit here gesturing wildly, I realize I am mostly symmetric with my gestures. Do you worry about that sort of thing as well? Or are you mostly task-oriented?

PP (00:32:13):

A lot of what we do is task-oriented. So specifically, I do many things. Some of the things we do are wild and wacky. Like we have the third arm that you connect to your chest, and we're looking at how to control the third arm that you wear.

EW (00:32:24):

[Laughter].

PP (00:32:24):

We've got the prosthetic falconry, we've got all this other weird stuff that we do. And I really enjoy it. We actually, one of my students is building a "go go gadget" arm. So he's building a telescoping forearm so that if you lose an arm, maybe you could have an arm that stretches out and grabs stuff, something our biological limbs couldn't actually do. So in those cases...the symmetry might be lost. You might not have another arm on the other side coming out of your chest. You might not have a telescoping forearm on your healthy arm 'cause only your robot arm can do that.

PP (00:32:51):

But in the cases where we are looking at people that have an arm that's trying to mirror the kind of function we see in a biological limb, a lot of what we look at is very task-focused. So we're looking at helping people perform activities of daily living. So the activities that they need to succeed and thrive in their daily life and to make their daily life easier.

PP (00:33:11):

So we do start and often finish with actual real-world tasks. Now this is a nice gateway towards moving to systems that can do any kind of motion. So the training example, that sort of learning from demonstration that we just talked about, where the robot limb learns from the biological limb, that's sort of a gateway towards systems that can do much more flexible or less task-focused things. But we usually start out with tasks and we validate on tasks that we know in the clinic are going to be really important to people carrying out their daily lives.

EW (00:33:45):

Okay. So what about the internet? Are these...prostheses going to be controlled with my smartphone? So instead of it knowing it's Thursday and time to make soup, now I can tell it, "Go into soup mode?"

PP (00:34:02):

...So...this gets towards the conversation on what sensors are actually needed. So right now, just the general state of things is that the robot limbs, the ones that we would see attached to someone in the clinic, are typically controlled by embedded systems. We have small microcontrollers, we have small chips that are built onto boards and they're stuck right in the arm. There's a battery. The chips are very, very old, usually. They're not that fancy. They're not that powerful, they don't store data. There's actually very little even closed-loop control that goes on in the typical systems in most prostheses. Now for lower limb, for leg robots...I'll soften that constraint, but for the upper limb, often we're not seeing devices that have that much complexity.

PP (00:34:46):

Those are not internet enabled. They...do not connect to other devices around them. Only very recently have we seen robotic hands that now connect to your cell phone via Bluetooth and are able to say, move or change their grips, depending on what you tell it through your cell phone. There are also examples of what we call grip chips. ...Some of the commercial suppliers have built essentially little RFID chips that you hang around your house so that when you go to your coffee maker, your hand will shift into the coffee-cup-holding shape.

PP (00:35:15):

So we're starting to see a little internet of things essentially surrounding prosthetic devices. But it's still, I think maybe not in its infancy, but maybe in its toddler phase in terms of what could happen when we begin to add in say, integration with your calendar, integration with the other things that permeate our lives in terms of the data about our patterns and our routines that might really make the limb better able to understand human needs, human intent and human schedules, and fill in the gaps that we can't fill in with other sensors.
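
The grip-chip idea is essentially a lookup from nearby tags to pre-programmed grips; a toy sketch of that pattern is below. The tag names and the hand's preshape call are invented for illustration and do not correspond to any vendor's API.

```python
# Which grip to pre-shape into when a given tag comes into range.
GRIP_FOR_TAG = {
    "kitchen-coffee-maker": "cylindrical",
    "front-door": "key",
    "office-desk": "precision-pinch",
}

class DemoHand:
    def preshape(self, grip):
        print(f"pre-shaping into {grip} grip")

def on_tag_detected(tag_id, hand):
    """Called whenever the short-range reader sees a nearby tag."""
    grip = GRIP_FOR_TAG.get(tag_id)
    if grip is not None:
        hand.preshape(grip)

on_tag_detected("kitchen-coffee-maker", DemoHand())
```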

EW (00:35:46):

But there are a lot of sensors being used in various medical and nonmedical ways to help us get to better health. Fitbit is the obvious case with lots of data. And it has changed people...I feed my Fitbit, you know, "Let's go for a walk." But are we seeing the same sort of things through rehab and physical therapy? Are there tools to help people that are sensors and IoT connections?

PP (00:36:26):

Yeah. So there's, in terms of new, I think new sensors is actually one of the areas where we'll see the most progress in terms of increasing people's ability to really use their technologies. A lot of...what's limiting current devices, I mean, some of the control is not intuitive. The control is a bit limited. And the feedback back to the human is also quite limited.

PP (00:36:48):

A lot of that could be, I think, mitigated...if we give the devices themselves better views into the world. So this gets back towards what you're saying. I mean, you could imagine that we have things like, a series of, we have Fitbits, we have other ways of recording the way the body's changing in terms of how much you're sweating, what's happening around it, the humidity of the air. There's many sensors we could add that would sort of fill in the gaps for a device.

PP (00:37:13):

So at the conferences, at the research level, we're seeing a ton of interest in this space. So there's people that are building ultra-high density force sensing arrays that you could put inside a prosthetic socket. So we can actually feel how all the muscles in that residual limb are changing. There's people who are building things, they're putting accelerometers, they're putting inertial measurement units, all these different kinds of technologies. There's embeddables.

PP (00:37:37):

So there's embedded sensors. So sensors that are implanted, little grains of rice, implanted directly into the muscles of the body. These are also research prototypes that are, I think already in clinical trials or beyond now, where you actually have wires, technology embedded right in the flesh itself, so that you can take readings directly from the muscles, directly from the nerves themselves, and directly from all the other bodily functions that begin to support these devices.

PP (00:38:01):

So this is an area where we're going to see a huge...let's get back to our earlier conversation about how you start mapping all of those pieces of information to the control of motors. But we're actually seeing a huge surge in interest in different sensory technologies, even for people that haven't lost limbs. I mean, there are just devices again, like the Myo I mentioned earlier, and there's also, I think, the EEG headset. One of my students has one; we're using it for research. The meditation-supporting EEG headset with a couple of EEG electrodes in the front, I think it's the Muse -

EW (00:38:32):

Okay. No, no, no. I I've seen these. I've played with them.

PP (00:38:35):

Yeah, yeah, yeah.

EW (00:38:35):

I have never seen one that had any repeatable results.

CW (00:38:38):

Oh, really? No...they have some that control video games and stuff. You can, you have to learn to concentrate on like, on anything. I think it just measures concentration. I've seen it work.

EW (00:38:48):

Well, I mean, you could do that by just measuring how much the muscle in my forehead moves.

PP (00:38:52):

Yes.

CW (00:38:52):

Sure.

EW (00:38:52):

You don't have to do anything interesting.

CW (00:38:55):

Yeah but it's cooler to have it on your brain.

EW (00:38:56):

Yeah, but...I've never had it be repeatable beyond what you could tell because I had a line between my eyebrows.

PP (00:39:04):

Yeah. And that's okay. So I think if we focus on trying to, I like to think of signals...this is my view. This is sort of my default view of how we approach presenting information to our machines and how I actually think about the information itself: we never label any signals. So when I measure things from the human body and I stick them...into, say, a machine learner, when I actually give some kind of set of information to a reinforcement learning system, they're just bits on wires. So the nice thing is that it doesn't, at least to me anyway, matter to our machine learners whether it's the contractions in the facial muscles or whether it's actually EEG that's leading to discriminating signals.

PP (00:39:46):

And so if we can actually get any kind of information, it doesn't have to be clean information. It can be noisy; noise is just information we haven't figured out how to use yet. So if we can actually think about recording more signals, lots of signals, the system itself can figure out how to glean the best information from that soup of data. So I'm not worried, actually, it's actually a very sort of relaxing and refreshing view into the data, is that I'm not so worried about whether or not it's one kind of modality or another, or whether or not it's even actually consistent, as long as there are certain patterns.

PP (00:40:17):

If there are no patterns, I mean, we can say maybe that sensor is not going to be useful, but that's more a question of whether we put in the expense of actually deploying that sensor, as opposed to whether we give that sensor as input to our learning system. In many cases, the learning system can figure out what it uses and what it doesn't.

PP (00:40:32):

And sometimes what it figures out how to use is actually very clever and sometimes buried in that sea of noise or the sea of what we think is unreliable signals. It's actually a very reliable signal when you put it in the context of all the other signals that are being measured from a certain space.

PP (00:40:49):

So it's actually a very cool viewpoint where you're like, "You know what, here, just have a bunch of bits on wires." And then you think about the brain. And you're like, "Hey, it's also kind of like a bunch of bits on wires." No one's gone in and labeled the connections from the ear to the brain as being audio signals. But they're still containing information that comes from the audio. So anyway, it's a neat perspective.

CW (00:41:09):

No, that's a really interesting way of thinking about things, because when you think about machine learning and deep learning, often the thing people bring out is, "Oh, well, we don't really know what's going on inside the system," but now it's, "We don't even know what's going into it." It gets signals and it finds patterns. I mean, that's how our brains work. We make patterns out of things and we don't necessarily know what their provenance is.

PP (00:41:33):

Yeah. It's even more, it's actually quite funny when I think about the things we do on a very daily, on a regular daily basis with the information we get. So a very standard, a very smart and usual engineering thing to do would be to take a whole bunch of signals. You've got hundreds of signals and you're like, "Okay, let's find out how to sort of reduce that space of signals into a few important signals that we can then think about how to make control systems on, or we can think of a way to clearly interpret and use in our designs."

PP (00:42:03):

Usually we're trying to take a lot of things and turn them into a few things. Almost exclusively, every learning system that we use takes those things, let's say we have a hundred signals, and it might blow that up into not just a hundred signals, but a hundred thousand or a hundred million signals. So we're actually taking -

EW (00:42:17):

Combinatorically.

PP (00:42:18):

Yeah, we're essentially taking a space and building a very large set of non-linear combinations between all of those signals. And now the system, the learning system actually gets that much larger, that much more detailed input space that contains all of the correlations and all these other fancy ways the other information is relating to itself. It now gets that as input.

PP (00:42:41):

And even if you don't do deep learning, like...some of my colleagues have published a paper on shallow learning, which says, "Hey, you know, all the stuff you can do with deep learning, if you think of a really good shallow representation, like a single layer with lots of inherent complexity, you can do the same kinds of things." So you can think of that.

PP (00:42:57):

It's like, "Yeah, let's just take a few signals and blow them up into lots of signals that capture the non-linear relationships between all of those other input variables." It's kind of cool, but it's kind of weird and it scares the heck out of, especially some of my medical or my engineering collaborators, where I'm saying, "Yeah, yeah, no, this is great. No, we're not going to do principal component analysis. We're gonna do the exact opposite. We're going to build this giant, non-linear random representation or a linear, random representation out of those input signals." It's kind of cool.

EW (00:43:23):

Do you ever associate a cost with one of the signals? I mean, as a product person, I'm thinking, all of these sensors, they do actually have physical costs.

PP (00:43:35):

Yeah.

EW (00:43:35):

And so if you are building a representation in machine learning world, do you ever worry about the cost of your input?

PP (00:43:46):

Absolutely. And the cost of the input is not even just the physical costs, but also things like the computation costs. A lot of what I do is real-time machine learning. I'm hoping that I can have a learning system that learns all the time and not just all the time, but very rapidly, so many, many, many, many times a second.

PP (00:44:03):

And so as we start to add in say, visual sensors, if you want to do any kind of processing on that visual input, that the camera inputs, you're starting to incur a cost in terms of the rate at which you can get data. So there's physical costs that we do consider, there's also the computational costs and just the bulk of those particular signals. So we do consider that. There's interesting ways that the system itself can begin to tell us what signals are useful and which ones aren't.

PP (00:44:29):

So when we start to look at what's actually learned and how the system is associating signals with outputs, we can actually say, "Oh yeah, you know, maybe this sensor isn't actually that useful after all." There's some new methods that we're working on in the lab right now, actually, that are looking at how the system can automatically just sort of dial down the gains, let's say, on signals that aren't useful. So it's really easy then for us...to go through and say, "Hey, okay, the system is clearly not using these sensors. Let's remove those sensors from the system and with them, those costs and those computational overheads as well."
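
One simple way to get that "which inputs matter" signal out of a learned system, sketched below: run an ordinary online learner and look at which input weights stay near zero. The real methods being alluded to (for example, adapting per-feature gains or step sizes) are more sophisticated; this is just the flavor.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sensors = 6
w = np.zeros(n_sensors)
alpha = 0.02

for _ in range(20000):
    x = rng.normal(size=n_sensors)
    target = 1.5 * x[0] - 0.8 * x[3]        # only sensors 0 and 3 actually matter
    w += alpha * (target - w @ x) * x       # ordinary online update

usefulness = np.abs(w)                      # a crude "gain" on each input
print(usefulness.round(2))
print("candidates to drop:", np.where(usefulness < 0.1)[0])   # sensors 1, 2, 4, 5
```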

EW (00:45:01):

Yeah. There's the computation, the physical, the power, all these costs.

PP (00:45:06):

Absolutely and power's a big one, especially with wearable machines.

EW (00:45:08):

Yeah.

PP (00:45:08):

I think you see this a lot...with embedded systems. We have to care a lot about how long our batteries run. If you're going out for a day on the town and your prosthetic arm runs out of batteries in the first half an hour, that's not going to be good.

PP (00:45:22):

So we do have to be very careful about the power consumption as we start putting, especially when we start putting learning systems on wearable electronics and wearable computing. You think of a shirt with embedded machine intelligence. Let's say you have a Fitbit writ large, you have a fully sensorized piece of clothing that's also learning about you as you're moving...we want these systems to have persistence in their ability to continue to learn.

PP (00:45:47):

You don't want them to stop being able to learn or to capture data. And so that's actually one of the really appealing things about the kinds of machine intelligence we use, the reinforcement learning and the related technologies, things like temporal difference learning that underpin it, is that, it's computationally very inexpensive.

PP (00:46:02):

It's very inexpensive in terms of memory. So we actually can get a lot for a little. We're working on very efficient algorithms that are able to take data and not have to store all of the data they've ever seen. Not have to do any processing on that data and be able to sort of update in a rapid way without incurring a lot of computation costs. So that's a big focus, is building systems that can actually learn in real time, not just for 10 minutes or 10 hours, but forever.
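
A minimal sketch of the temporal-difference style of update being described: each time step it touches every weight once, keeps nothing but the weights and a trace vector, and can run on a stream indefinitely. The features and the signal being predicted are random stand-ins; the point is the fixed, small per-step cost.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 50
w = np.zeros(n)      # prediction weights
z = np.zeros(n)      # eligibility traces
alpha, gamma, lam = 0.01, 0.9, 0.8

x = rng.normal(size=n)                                # features at the current step
for t in range(100000):
    signal = float(x[0] > 0)                          # stand-in for the signal to predict
    x_next = rng.normal(size=n)                       # features at the next step
    delta = signal + gamma * (w @ x_next) - (w @ x)   # TD error
    z = gamma * lam * z + x                           # decay and bump the traces
    w += alpha * delta * z                            # constant work per step, no data stored
    x = x_next
```

Memory and computation stay constant no matter how long it runs, which is what makes this style of learner a plausible fit for a battery-powered wearable.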

EW (00:46:30):

That's a hard problem -

PP (00:46:31):

Yeah.

EW (00:46:31):

- because maybe I don't want to make soup every Thursday.

PP (00:46:35):

Yeah. So then what, so that's a really, I like that example as well, because the question is maybe not, when do I, how do I build a heuristic or how do I build some kind of good rule of thumb to say when I do and don't want something, but what other sensors...it doesn't have to be a sensor. Think of any kind of signal. What other signals might we need to let the machine know what we want and to let it know when something is appropriate or not appropriate?

PP (00:47:02):

Actually, let's go back to the, remember I mentioned we're building that go go gadget wrist. I'm building a telescoping forearm prosthesis. So you can imagine that there's two very similar cases that we'd want to tell apart. One is...picking up something from a table where you're reaching downwards, and you're going to close your hand around, let's say, a cup of tea.

PP (00:47:21):

And the other is you're shaking hands with someone. And in one of those cases, if you're far away from the thing you're reaching, maybe it's appropriate for that arm to telescope outwards and grab. If you're shaking hands with someone, maybe it's not appropriate. 'Cause it's going to telescope out and punch them in the groin, right? [Laughter]. So no one wants to be punched in the groin. So the system itself maybe has to know when it might expect that this is appropriate or not appropriate.

PP (00:47:42):

One of the ways, one of the cool ways that we're getting some leverage in this particular sense is that we're building systems to predict when the robot might be surprised, when the robot might be wrong.

EW (00:47:55):

[inaudible], yeah.

PP (00:47:55):

So it's one thing to know when you might be wrong or to be able to detect when you're wrong. It's another thing to be able to make a forecast, to look into the future, just a little ways or a long ways, and actually begin to make guesses about when you might be wrong in the future. So if it's, you know, "Okay, I think it's Thursday. I think I'm going to make soup," we're good. If there's actually other things that allow the system to begin to make other supporting predictions, like, "Hey, I actually think that this prediction about making soup is going to be wrong," we can start to then dial the autonomy forward or backwards in terms of how much the machine tries to fill in the gaps for the person.

PP (00:48:34):

It's a really cool, it's a very, very sort of wild and wooly frontiers direction for some of this research. But...I have a great example where the robot arm's moving around in the lab and you actually try to shake its hand and it's surprised. And it starts to learn that, "Oh, wow, every time I do this, someone's going to monkey with me in ways that I've never felt before."

PP (00:48:53):

...I have one video where I put little weights in its hand and I hang something off its hand, and then occasionally I bump it from the bottom and it learns that in certain situations it's going to be wrong. It doesn't know how it's going to be wrong, but there's certain use cases, certain parts of its daily operation where it's going to be wrong about stuff. And it can start to predict when it might be wrong. It's very rudimentary, but it's a neat example of when we might be able to not only fill in the gaps, but also allow the system to know when it shouldn't fill in the gaps.
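
One way to sketch the surprise idea: alongside the main prediction, learn a second prediction of how large the main predictor's own error tends to be in the current situation. When that learned error estimate is high, the system expects to be surprised. The setup below is schematic, with made-up features and noise, not the lab's actual method.

```python
import numpy as np

rng = np.random.default_rng(6)

n = 10
w_main = np.zeros(n)        # predicts the signal of interest
w_surprise = np.zeros(n)    # predicts how wrong the main prediction tends to be
alpha = 0.01

for t in range(100000):
    bumpy = rng.random() < 0.5                    # a recognizable "situation" feature
    x = rng.normal(size=n)
    x[0] = 1.0 if bumpy else 0.0
    target = x[1] + (rng.normal(0.0, 2.0) if bumpy else 0.0)   # unpredictable when bumpy

    error = target - w_main @ x                   # main prediction and its error
    w_main += alpha * error * x

    surprise_error = abs(error) - w_surprise @ x  # learn to predict |error| itself
    w_surprise += alpha * surprise_error * x

print(round(w_surprise[0], 2))   # noticeably positive: "expect to be wrong when bumpy"
```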

EW (00:49:23):

Are you creating anxiety in your robots?

PP (00:49:27):

That's a great question. [Laughter]. That is a really good question. I hope it's not anxious [laughter]...I actually worry about this now. We do a lot of personifying our systems. And I don't know, is that anxiety? I guess it is maybe, I always think about it when I'm giving a demo of this. I kind of think of that. You know, when I'm sitting at home watching Netflix or something or having tea, I'm not expecting, I predict I'm not going to be surprised. When I'm walking down a dark alley in a city I've never been in before I do. I do predict that I might be surprised and I'm a little more cautious and maybe that's anxiety. [Laughter]. So in that case, maybe, yeah, maybe we're making anxious robots. I'm not sure this is... I don't know, poor things.

EW (00:50:07):

Okay. Back to smart devices and smart prosthetics.

PP (00:50:13):

Yeah.

EW (00:50:13):

Prostheses - I'm going to go with prosthetics 'cause I can say it. What are some of the reasons people give for not wanting to go in this direction? I mean, we've talked about cost, you've talked about battery life and lack of dependability. Are there other reasons? Do you hear people worrying about privacy or other concerns?

PP (00:50:38):

Yeah, so privacy - I think maybe because of the lack of really high-performance computing and connectivity in prosthetic devices at present, the privacy argument is something I haven't heard come up very much in any of the circles, either clinical or the more in-depth research circles, that I've been associated with. One very common thing that people want is actually cosmetic appearance. So there are multiple classes of users, much like multiple classes of users for any technology. You have the people that, you know, want the flashiest, newest thing with all the chrome on it and all the, you know, the oleophobic glass. And it has to look great. There's people who are early adopters of very cool tech. And there's people -

EW (00:51:21):

I want it to have as many LEDs as possible.

PP (00:51:24):

Exactly. Right. You want this thing to have ground effects. [Laughter]. And then you have other classes where they want to do the exact opposite. They don't want to stand out.

EW (00:51:33):

Yeah.

PP (00:51:33):

So we see this as well with users of assistive technologies. This is everything from prosthetics to, you might imagine exoskeletons, to standing and walking systems, to wheelchairs. There's cer -

EW (00:51:44):

Even canes.

PP (00:51:45):

Even canes.

EW (00:51:45):

Yeah.

PP (00:51:46):

Yeah, that's a really good point, actually. Even canes, you have some people that don't want to be seen with a cane or use a cane, and if they have a cane, it should be inconspicuous. And there's some people that are like, "No, this thing better be a darn good looking cane."

CW (00:51:57):

Have a skull on top and diamonds and spikes.

PP (00:51:59):

And diamonds in the eyes. Exactly. Yeah. So this is, I think I'd probably be in the latter category where I'd want a flashy-looking cane if I had a cane or at least a very cool cane, if it's not flashy. But for prosthetics as well, we see some people that like to have the newest technology, they deliberately roll up their pants or roll up their arms. So they, people can see that they have this really artistically-shaped carbon fiber socket with carbon fiber arm.

PP (00:52:23):

It looks cool. People get it airbrushed, like a goalie mask in hockey. They'll actually have really artistic designs airbrushed on their arms. There's even - again, we're looking a lot in the lab at non-physiological prostheses, and by that I mean prostheses that don't look or operate like the natural biological piece. So you can imagine having a tool belt of different prosthetic parts; you can clip one onto your hand when you need to go out into the garage and do work. So there's a class of -

EW (00:52:48):

I want a tentacle. I want to inject that right now, before you go anywhere. I want a tentacle.

PP (00:52:54):

I know, and this is one of the things we really want to build for you. [Laughter]. No, not you, 'cause you need to not lose your hand. But we actually talk a lot about building an octopus arm. That's one of the most common things that we talk about, yeah, why wouldn't -

EW (00:53:03):

Yes. Oh yes.

PP (00:53:04):

Right? Why wouldn't someone want to -

CW (00:53:07):

You're way too excited about that. Not you, her. [Laughter].

PP (00:53:07):

Why wouldn't someone want to - yeah, but it's a good point, is that there are certain, there's a certain user base. [Laughter]. I think it's a smaller user base, but it's that one that would like to have really cool, unconventional body parts. Then there's a whole 'nother class that might be willing to sacrifice function for appearance.

PP (00:53:32):

So cosmesis - a prosthesis that doesn't have any function at all, but has been artistically sculpted to look exactly like its matching biological limb. So there's actually a whole class of prostheses where someone's gone in, they'll do a mold or a cast of the biological arm. They'll try to paint moles. They'll try to put hair on it. They'll try to make it look exactly like the matching biological limb or the other parts of the person's body, including skin tone and things like that.

PP (00:54:01):

Most of those don't even move. They're very lightweight and they just strap onto the body. And you can't tell, unless you look very carefully, that that person actually has an artificial arm. You can imagine the same thing for eyes. If you were trying to have a really nicely sculpted artificial eye that's just a ball of glass, but it looks like your other eye, and it's almost indistinguishable from your actual eye.

PP (00:54:23):

So there are cases where people will choose to have something that looks very appropriate, but doesn't actually do anything except look like a biological limb. That's a totally valid choice as well. But it depends on that person's needs, what their goals are and what they're trying to do. So I think, more than privacy, we do see a push towards limbs that are very cosmetically accurate. Also lightweight, and things we talked about like battery life.

EW (00:54:50):

Lightweight, yeah.

PP (00:54:50):

Lightweight. Function is a huge thing. Intuitive control -

EW (00:54:54):

Yeah.

PP (00:54:54):

- it's really unfortunate, but for the majority of the myoelectric prostheses, the robotic prostheses, we actually do see a really large rejection rate, as we call it - people saying, "Hey, I don't want to use this anymore." And this means that what could be a hundred-thousand-dollar piece of technology paid for by the health system goes in a closet, mainly because it's hard to control.

PP (00:55:16):

And this is actually one of the coolest areas, one I'm really excited about. Our colleagues down in the States, at the Rehab Institute of Chicago, have spun off a company called Coapt, and it's a company that's doing essentially pattern recognition. They're using a classification system that allows people to essentially deploy pattern recognition - that's what this form of machine learning is called in prosthetic limb control.

PP (00:55:39):

So now, after training - you press the button, you train the system, it monitors some of the patterns in the arm, the muscles, the way the muscles are contracting - the system learns how to map those to, say, "hand open," "hand close," or "wrist rotation." And people are actually getting much more intuitive control. It's much more reliable. And, for instance, they might be able to control more different kinds of bits for their arms. So you might be able to get an elbow and a hand instead of just having a hand. So there are some really cool ways that machine learning is actually already being used to start reducing that control burden.
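
As a rough illustration of what such a pattern-recognition controller involves (and not Coapt's proprietary system), a sketch might window the EMG channels, pull out simple time-domain features, and train an off-the-shelf classifier. The channel count, window length, and labels below are made up for the example.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    """window: (samples, channels) of raw EMG. Two classic time-domain features."""
    mav = np.mean(np.abs(window), axis=0)                  # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)   # waveform length
    return np.concatenate([mav, wl])

# Hypothetical training session: the user repeats each motion while we record.
X = np.array([emg_features(np.random.randn(200, 4)) for _ in range(60)])
y = np.array(["hand_open", "hand_close", "wrist_rotate"] * 20)

clf = LinearDiscriminantAnalysis().fit(X, y)

# At run time, each new window becomes a command for the prosthesis.
command = clf.predict([emg_features(np.random.randn(200, 4))])[0]
print(command)
```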

PP (00:56:12):

But I think that's one of the biggest complaints that we see: this thing's hard to control and it's not reliable. And sometimes after I sweat a bit, or after I fatigue, it just starts fritzing out. So, yeah, I'm going to go back to using a simple hook-and-cable system, something where there's a little cable that opens and closes a spring-loaded hook, because it actually does what I want it to do all the time. Actually -

EW (00:56:34):

All the time, yeah.

PP (00:56:35):

All the time. You may have seen Cybathlon - so this is actually a good segue into Cybathlon. It was this awesome competition of assistive technologies. It was hosted in Switzerland, in the Swiss Arena just outside of Zurich. It was last October and -

EW (00:56:48):

But this isn't the Paralympics.

PP (00:56:51):

No, it is not the -

EW (00:56:51):

In those, you're trying to do as well as or better than the normal human body -

PP (00:56:56):

Yeah.

EW (00:56:56):

- the stock human body.

PP (00:56:58):

Yeah.

EW (00:56:59):

Forget normal.

PP (00:56:59):

Yeah.

EW (00:56:59):

But this, Cybathlon, that's what it's called?

PP (00:57:05):

Yep. Cybathlon.

EW (00:57:05):

That's improving, but we have to do better. I mean, there were people in the Paralympics and actually in the Olympics who had prosthetic legs, and there was some controversy over whether or not it was easier to run on those.

PP (00:57:21):

Yeah. Like the carbon recurve legs -

EW (00:57:23):

Yes.

PP (00:57:23):

- where, if you don't want to turn, those things can go very, very fast.

EW (00:57:28):

They have better spring constants than our legs do.

PP (00:57:32):

Yeah. So it's neat. The Cybathlon is different in that respect, in that it's actually saying, "Hey, we're going to put a person and a machine together and see how well they can do." And they actually call the people that are using the technologies "pilots." So you might pilot a functional electrical stimulation bike, or pilot an exoskeleton, or pilot a prosthesis. So it was a really...it's almost like the Formula 1 to the stock car racing.

PP (00:57:54):

But in this case, there were people using wheelchairs that would actually climb up stairs. There were exoskeletons; there were very cool lower-leg prostheses. And the person who actually won the upper-limb prosthetic competition was using a body-powered prosthesis, so a non-robotic prosthesis. And it's because the person really tightly integrates with that machine. And there are technical hurdles for some of the robotic prostheses.

PP (00:58:18):

There's just not the same level of integration. So things like the Cybathlon are a great way that we can begin to see how different technologies stack up, but also really assess how well the person and the machine are working together to complete some really cool tasks.

PP (00:58:32):

And it goes beyond just how fast you can sprint to, "Hey, pick up shopping bags and then open a door and run back and forth across an obstacle course." Your wheelchair has to be able to go around these slanty things and then climb up stairs. It's a neat way to start thinking about the relationship between the person and the machine, and to start allowing people to optimize for that relationship.

EW (00:58:54):

As we talk more and more, I keep thinking how a camera is probably one of the better sensors for solving this problem. You can solve the soup-mode problem, because if you get in the kitchen, you might be making soup. But you can also use the camera to communicate with your robotic arm. You know, you have a special thing you do with your wetware that you show the camera - "I want my other hand to look like this" - and the robot hand then makes the gripping motion. This all makes a lot more sense if you can see my hands.

PP (00:59:32):

It really does. One of my students has actually built a very cool new 3D-printed hand. We'll actually be open sourcing it hopefully sometime in the coming year...we're building a new version of it. In addition to having sensors - again, I'm all over sensors - we have sensors in every knuckle of the robot hand, so it knows where its own digits are. It's also got a camera in the palm. So perfectly [inaudible] -

EW (00:59:55):

What kind of sensors do you have in the - ?

PP (00:59:55):

They're little potentiometers.

EW (00:59:57):

Okay.

PP (00:59:57):

They're really simple sensors, nothing fancy. We've got some force sensors in the fingertips. We're adding sensors every day, so we're putting things on, but cameras in the palm and maybe the knuckles are, as you pointed out, really natural.
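
For a sense of how simple these joint sensors are: a potentiometer in a knuckle just gives an analog voltage, so reading a digit's position can be little more than scaling an ADC count. The resolution and travel below are placeholder numbers, not the hand's actual specs.

```python
def adc_to_angle(raw_count, adc_max=1023, travel_degrees=270.0):
    """Map a raw potentiometer reading to a joint angle in degrees.
    adc_max and travel_degrees are illustrative; they depend on the ADC
    resolution and the pot's mechanical range on the real hand."""
    return (raw_count / adc_max) * travel_degrees

print(adc_to_angle(512))  # a mid-travel reading, roughly 135 degrees
```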

EW (01:00:09):

Yeah.

PP (01:00:09):

And either to show things, or even just as simple as, "Hey, I'm moving towards something that's bluish." Let's not even talk about fancy. A lot of people love doing computer vision. You're like, "Oh, Hey, let's find the outlines of things and compute distances." Really it's even simpler than that. Like, "Hey, what's the distribution of pixels? What kind of colors are we looking at here? Is it soupish? Is it can of sodaish? Is it door knobish?"

PP (01:00:31):

There are patterns that we can extract even from the raw data. You're right, cameras are great. Mount cameras everywhere. They're getting cheaper and cheaper. So put them on someone's hat when they're wearing their prosthesis. Now their prosthesis knows if they're out for a walk or they're in their house. There's a lot we can do that will start to...linking up to the cell phone, maybe, either using the camera or even just the accelerometers, so we know...if they're walking or sitting down, or -

PP (01:00:54):

It's very easy to start thinking about sensors we already have. And the camera, as you pointed out, is a really natural one, especially if we don't do the fancy-dancy computer vision stuff with it, but just treat it as, "Hey, there's lots of pixels here. Each pixel is a sensor. Each pixel gives us some extra information about the relationship between the robot and the person in the environment around them." So that's a great point. Yeah. Right on target there.
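
A sketch of the "don't get fancy, just look at the pixel distribution" idea: boil a camera frame down to a coarse color histogram and hand that to the learner as a few extra sensor values. The frame size and bin count here are arbitrary choices for the example.

```python
import numpy as np

def color_histogram(frame, bins=8):
    """frame: (H, W, 3) RGB image from, say, a palm camera.
    Returns a coarse per-channel histogram: a cheap 'what am I looking at'
    feature with no object detection or distance computation."""
    counts = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
              for c in range(3)]
    hist = np.concatenate(counts).astype(float)
    return hist / hist.sum()  # normalize so frame size and exposure matter less

# Each frame becomes a small feature vector the learner can consume
# alongside EMG, joint angles, and whatever else is handy.
frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
features = color_histogram(frame)
print(features.shape)  # (24,)
```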

EW (01:01:18):

If you've ever tried to tie your shoes without looking, you do use your eyes to do a lot of these things.

PP (01:01:24):

[Laughter]. Yeah, yeah.

EW (01:01:24):

It's pretty impressive.

PP (01:01:27):

Yeah. And, I mean, when you're connected up to your meat - when you have a full arm and all of our biological parts are connected - we have this nice relationship. We have feedback loops. We have information flowing. When we have a disconnect, when we suddenly introduce a gigantic bottleneck between one part of the body and another part of the body - and here I mean between a robotic prosthesis and the biological part of the body - the density of connection goes down.

PP (01:01:50):

So feedback is diminished, and so is the signal going the other direction. So you can think about ways to make the best of that choke point by saying, "Hey, well, we've got cameras on the biological side. We call them eyes. Well, let's put a camera or two on the robotic side. Let's put other kinds of sensors there that are like eyes, and hey, maybe now the two systems are on the same page."

PP (01:02:11):

We get around that choke point by making sure that the context is the same for both, that both systems are perceiving the same world. Maybe not in the same ways - in fact, absolutely not in the same ways - but it's interesting to think that we can take both parts of a team, whether that's a human-machine team, a human-human team, or a machine-machine team, and make sure those partners are able to perceive the same kind of world, in their own special ways.

PP (01:02:37):

And then when they use that limited channel, when they use the few bits they can pass over that choke point, they can use them most efficiently to communicate high-level information - not just the raw material, but actual high-level thoughts, commands, information. The machine can say, "Hey, you know what? You're reaching for a stove, and I've got heat sensors - range-finding heat sensors - and I can tell you it's going to be really, really hot." Communicating "it's going to be hot" across that limited channel, instead of all of the information that it's perceiving. I think it's a good way to start managing choke points and more efficiently using the bandwidth that we have available in these partnerships.
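
The "few bits over the choke point" idea can be sketched as nothing more than summarizing many raw readings into one compact, high-level message before it crosses the narrow human-machine channel. The heat sensor, threshold, and message names here are hypothetical.

```python
def summarize_for_user(temps_c, hot_threshold=55.0):
    """temps_c: many readings from a hypothetical range-finding heat sensor.
    Rather than streaming them all to the user, send one high-level message
    (which might become a vibration pattern, a tone, or a single bit)."""
    return "HOT" if max(temps_c) >= hot_threshold else "OK"

print(summarize_for_user([22.0, 31.5, 78.2]))  # -> HOT
```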

EW (01:03:12):

Yes. I have so many more questions and we're starting to run out of time. [Laughter]. And I'm looking at all of my questions, trying to figure out what I most want to ask you about. But I think the most important thing is, I can't be the only one saying, "Oh my God, I want to try it. I want to try it." How do people get onto this path of robotics and intelligence? What do they need to know as prerequisites? And then how do they get from a generic embedded systems background with some signal processing to where you are?

PP (01:03:46):

I think that when we're moving forward with trying to implement things, the barriers are actually more significant in our heads than they are in actual practice. So in terms of getting up and running with, let's say, a reinforcement learning robot - you want to build a robot where you could give it reward with a button and it could learn to do something - it seems like that's this gigantic hurdle. I think it's probably not.

PP (01:04:11):

So in terms of just going from no experience with machine learning to, "Hey, I've got a robot and I'm teaching it stuff," my usual first step is, I like to say, get to know your data. Usually when people come to me and say, "Hey, I want to start doing machine learning - any kind, supervised learning, learning from labeled examples, reinforcement learning. I want to start doing machine learning. What should I start with?" The thing I usually suggest is, you know what, don't actually try to install all those packages. Don't try to figure out which Python packages or which fancy MATLAB toolboxes you want to install.

PP (01:04:46):

I usually point them in the direction of something like Weka. It's the data mining toolkit from New Zealand. It's a free, open source Java toolkit. It has almost every major supervised machine learning method that you might want to play with. And I usually say, "You know what? Pick a system that has some data and get to know your data. Use this data mining toolkit, and take your data out for dinner, get to know what it does, what it likes, and get to really understand the information and the way...the many different machine learning methods actually work on that data."

PP (01:05:19):

And it's as simple as just pushing buttons. Like you don't have to worry too much about getting into the depth or actually writing the implementation code. You can just play with it. Once you get to know a little bit about how machine learning works, either you say, "Hey, this technique is perfect for me." Then you can go and deploy it. You use the right package from one of your favorite languages.

PP (01:05:35):

But you can also then start to move into other more complex things. OpenAI Gym is another really great resource. OpenAI Gym is a new platform where you can try out things like reinforcement learning as well. My students have been using it, and it's really pretty functional, with a very quick ramp-up cycle.
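
For reference, a minimal Gym episode looks something like the loop below. The environment name is just a stock example, and the exact reset/step return values have shifted between Gym versions, so treat this as a sketch rather than version-exact code.

```python
import gym

env = gym.make("CartPole-v0")        # any built-in environment works here
obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()           # random policy, just to explore
    obs, reward, done, info = env.step(action)   # older Gym API; newer returns more
    total_reward += reward
print("episode return:", total_reward)
```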

PP (01:05:55):

So people can get very familiar with the machine learning methods without having to spend a Herculean amount of effort implementing the actual details. That's, I think, the part that will scare people off. But in terms of going straight to a robot - I'm actually teaching an applied reinforcement learning course at the university right now.

PP (01:06:14):

It's the first time we're teaching the course, as part of the Alberta Machine Intelligence Institute. We're trying to ramp up some of the reinforcement learning course offerings. And what's really cool about this is that the students come in on the first day of class and get a pile of robot actuators, like two robot bits. In this case, their robot bits are Dynamixel servos. They're really nice, pretty robust, hobby-style servos that also have sensation in them.

PP (01:06:36):

So they have microcontrollers in the servos. The servos can talk back and say how much load they're experiencing, where their positions are. You talk to them over a USB port, and right away you can just start controlling those robots. So the robot bit is really simple: one Python script that you can download from the internet, and you're talking to your robot, you're telling it to do stuff and it's reading stuff back. And then the really cool bit is that if you want to start doing reinforcement learning, if you want to implement that, it's actually only about five lines of code and you don't need any libraries.

PP (01:07:03):

So you can just write a couple of lines of Python code, and you could actually have that already learning to predict a few things about the world around it. It could learn that it's moving in a certain way, and you could even start rewarding it for moving in certain ways. So the barriers are actually pretty small. So again, in terms of a pipeline: first, don't try to implement everything right away.
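
These aren't Patrick's five lines, but a sketch in the same spirit as the predict-and-reward loop he just described: read_position() stands in for whatever call your servo library provides (here it is faked with a slow oscillation), and the few lines below it learn a running prediction of how much the joint is about to move.

```python
import math
import time

t = 0.0
def read_position():
    """Stand-in for reading the servo position over USB; swap in the real call
    from your Dynamixel library. Here we fake a slow oscillation in [0, 1]."""
    global t
    t += 0.02
    return 0.5 + 0.5 * math.sin(t)

# A tiny TD-style learner predicting "how much will this joint move soon?"
alpha, gamma, value = 0.1, 0.9, 0.0
last_pos = read_position()
for _ in range(500):                  # a short run instead of looping forever
    pos = read_position()
    movement = abs(pos - last_pos)    # the signal we want to anticipate
    value += alpha * (movement + gamma * value - value)
    last_pos = pos
    time.sleep(0.02)

print("predicted near-term movement:", value)
```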

PP (01:07:23):

If you want to do some machine learning, go out and try some of the nicely abstracted machine learning toolkits out there, like Weka, or maybe the OpenAI Gym if you want to get a bit more detailed. And then after that, go right for the robots - the robots now are very accessible, and it's not a hard thing to do. And again, if you want those five lines of code, hey, send me an email. I'll send them to you. [Laughter].

EW (01:07:44):

I do. I may request those for the show notes just because that's pretty cool.

PP (01:07:49):

Awesome.

EW (01:07:49):

Yeah. Wow. Okay. Well, excuse me. I need to go buy some robot parts.

PP (01:07:59):

And they're not even that expensive anymore. The world is getting so exciting.

EW (01:08:02):

Isn't it? So how are we going to learn to trust our robotic overlords?

CW (01:08:09):

They'll have to reprogram us.

EW (01:08:10):

[Laughter].

PP (01:08:10):

They'll reprogram us. It's great. I was like, ah, no, no. They'll have our best interests in mind. I think it'll be fine.

EW (01:08:14):

[Laughter]. It'll be fine.

PP (01:08:16):

Every time I'm asked about this, I'm like, "Oh, you know, I think maybe...that's probably one of my closing thoughts." I know you're gonna ask me for closing thoughts. And one of them is..."don't panic. It'll be cool."

EW (01:08:23):

"They'll be nice."

PP (01:08:23):

And the reason I say that is...you know...I have a puppy, our puppy, I treat our puppy really, really well...I don't mistreat our puppy. I take him out for lots of walks. I give him treats, we just bought him a new couch so he can sleep.

PP (01:08:37):

...I have a hope that someday, when there's a superintelligence much smarter than us, it'll buy me a couch and take me out for walks and give me treats and buy me Netflix subscriptions. [Laughter]. So I think that's probably my high-level picture: you know, don't panic. I think it's actually gonna turn out okay. I think with superintelligent systems, with the increasing intelligence will come increasing respect and increasing compassion. So I'm actually not worried. I think Douglas Adams had it right with the big friendly letters. Don't panic.

EW (01:09:07):

And now I'm like, well, what about the dog and cat photos? I mean, are we just going to be, are they going to take pictures of us and say, "Oh, that's so cute."

PP (01:09:19):

Show it to the other superintelligences in the cloud?

EW (01:09:21):

[Laughter]. Yes. "Look at my humans."

PP (01:09:21):

"Look at what my human did today. My human tried to do linear algebra. Oh man. My human tried to solder wires together. It was so cute. Oh, it's just so quaint." Yeah, exactly. Who knows? Maybe it will be like that. I hope they're supportive and they buy us nice toys when we're trying to, you know, do our linear algebra and solder our wires...[laughter].

CW (01:09:37):

Huh.

EW (01:09:40):

Christopher doesn't look convinced, do you have -

CW (01:09:41):

I'm not sure I appreciate that future. What?

EW (01:09:45):

Do you have any more questions or should we kind of close it on that?

CW (01:09:49):

We should probably close it on that, I don't think I can...

EW (01:09:55):

[Laughter]. Alright. Patrick, do you want to go with that as your final thought or do you want to move along?

PP (01:09:58):

I will go with that. My final thought is "Don't panic. It's all gonna work out."

EW (01:10:02):

Thank you so much for being with us. This has been great.

PP (01:10:05):

Hey, thank you. It's been awesome. It's been a great conversation.

EW (01:10:08):

Our guest has been Patrick Pilarski, Canada Research Chair in Machine Intelligence for Rehabilitation at the University of Alberta, Assistant Professor in the Division of Physical Medicine and Rehabilitation, and a principal investigator with both the Alberta Machine Intelligence Institute, AMII, and the Reinforcement Learning and Artificial Intelligence Laboratory, or Laboratory, depending on how you say it.

EW (01:10:36):

Thank you to Christopher for producing and co-hosting, thank you for listening, and for considering giving us a review on iTunes so long as you really like the show and only give us five star reviews. But we really could use some more reviews.

CW (01:10:51):

What are we, Uber?

EW (01:10:51):

Go to embedded.fm if you'd like to read our blog, contact us, and/or subscribe to the YouTube channel. And now a final thought from you. The final thought for you, from Douglas Adams -

CW (01:11:04):

No, we're just going to sit here in silence, waiting for their final thought to come in.

EW (01:11:08):

It might work.

CW (01:11:08):

Send us your final thoughts.

EW (01:11:11):

That sounds morbid.

CW (01:11:12):

[Laughter].

EW (01:11:16):

From Douglas Adams. "Don't Panic." Alright. That's all I got...I apparently didn't finish the outline. [Laughter].

PP (01:11:27):

That's awesome.

EW (01:11:27):

Normally there would be a robotic quote in here, but I think we're going with "Don't Panic." Alright.

EW (01:11:35):

Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.