
437: Chirping With the Experts

Transcript from 437: Chirping With the Experts with Daniel Situnayake, Chris White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. We are going to talk about making things smarter. And by things I mean not us. I mean AI at the edge, making devices smarter. Our guest is Daniel Situnayake, author of the new O'Reilly book, "AI at the Edge."

CW (00:00:25):

Hi Daniel, welcome back.

DS (00:00:26):

Hey, thank you. It is so good to be back, and really, really excited to be here today.

EW (00:00:32):

You have been on the show before, but it was long ago. Could you tell us about yourself, as if we met at an embedded systems conference?

DS (00:00:40):

Yeah, I am the Head of Machine Learning at a company called "Edge Impulse." We build tools that basically make it easy for non-machine-learning engineers to build stuff using machine learning and AI. I have been working at Edge Impulse since we were three people. We are now, I think, somewhere between 80 and 90 at this point. I have lost track.

(00:01:06):

Before that I was working at Google, on a team that was building TensorFlow Lite for microcontrollers, which is basically the Google open source library for running deep learning models on embedded devices. So really excited about running deep learning and classical machine learning on tiny devices, but also about making that easier for people.
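To give a flavor of what that looks like in practice, here is a minimal sketch of the TensorFlow Lite inference flow, written with the Python interpreter rather than the C++ microcontroller API. The model file name and the zeroed input tensor are placeholders, not from any real project; on a microcontroller, TensorFlow Lite for Microcontrollers follows the same load, set input, invoke, read output pattern in C++.

```python
import numpy as np
import tensorflow as tf

# Load a converted .tflite model (placeholder file name).
interpreter = tf.lite.Interpreter(model_path="keyword_model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Zeroed features standing in for real sensor data.
features = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], features)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```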

EW (00:01:32):

Is it actually easier for people, really now?

DS (00:01:35):

Oh, absolutely. When we last spoke, I think I was just off the treadmill of writing the previous book, "TinyML," with Pete Warden. We had set out trying to write a book that was a 101 introduction to the idea of embedded machine learning. We ended up with this book that is a couple of inches thick, and it is still less than what you need to know to get started, really. It is an overview, plus a deep dive into certain sections. But really, if you wanted to build a legit project using embedded machine learning, there is a lot more to learn beyond that.

(00:02:25):

So over the last three years or so since we wrote that book, things have become so much easier that if you are a domain expert in a particular thing, whether it is understanding the health of machines in an industrial environment that you might want to monitor, or building a human-computer interaction device that needs to understand some keywords people are saying, or all sorts of different things, it has got to the point where you can do all of this without writing any code.

(00:02:59):

You can pretty much point and click at things. You still need a bit of understanding of what you are doing, but there is so much content out there now that you can use to help learn, that we are really off to the races. There is obviously a huge amount of work that needs to be done, to continue to make this stuff easier and more accessible, and to make it harder to make some of the common mistakes that people can run into. But it is really night and day, compared to where we were before, which is very exciting.

EW (00:03:29):

So your previous book was TinyML. What was the cover animal on that?

DS (00:03:35):

It was a little hummingbird. I forgot the name, the scientific name, but, yeah, it is a little red hummingbird.

EW (00:03:45):

Okay. We want to do lightning round, where we ask you short questions. We want short answers, and if we are behaving ourselves, we will not ask more detailed things.

DS (00:03:54):

<laugh> Sounds good.

EW (00:03:55):

Christopher, are you ready?

CW (00:03:57):

Do we need to warn him where some of these came from?

EW (00:03:59):

No.

CW (00:03:59):

Okay.

EW (00:03:59):

I mean, if you want to.

CW (00:04:01):

What are some common applications of machine learning in the real world, and how might they relate to the interests of crickets and other animals?

DS (00:04:08):

<laugh> All right, so machine learning. In the real world, machine learning has been used mostly for analysis of data, until recently. So you imagine you have got a business and you are selling things, and you want to understand what kind of factors can predict whether you are going to sell more or less of the thing you care about. You could train a model on all of your data about sales, and come up with some insights or come up with a way to predict whether a new product is going to sell well or something like that.

(00:04:40):

But what we have been able to do in the past five years or so, is start doing machine learning on sensor data on edge devices. So we are actually out there in the real world, understanding this data that is being generated by sensors. If you can think of any place where a human might be able to provide some value by being out there and keeping an eye on things, what we can do now is potentially automate some of that insight that a person can bring.

(00:05:18):

So imagine you have got a bunch of machines in a factory, and you want to know when they are showing warning signs of maybe being worn out and needing replacing. One thing you could do is send out an engineer to go and listen to all the machines, and have a look at them and poke around, and see whether something looks bad. But another thing you could do is train a model, using insight from that engineer, and then deploy that model out to devices out in the field, and have the devices tell you when they get into that state, which allows you to scale much better. Because instead of just having your one engineer who is working eight hours a day, you can have a device on each of these machines, and you can catch the problems immediately. So you can scale that or move that around to loads of different applications.
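As a rough illustration of that idea, here is a toy sketch in Python: it learns an energy baseline from synthetic "healthy" vibration windows and flags anything far outside it. The feature, the thresholds, and the data are all invented for illustration, not from any real deployment.

```python
import numpy as np

def rms(window: np.ndarray) -> float:
    """Root-mean-square energy of one window of accelerometer samples."""
    return float(np.sqrt(np.mean(window ** 2)))

# "Training": characterize normal behavior from healthy-machine recordings.
healthy = [np.random.normal(0.0, 1.0, 512) for _ in range(100)]
baseline = np.array([rms(w) for w in healthy])
mean, std = baseline.mean(), baseline.std()

def looks_worn_out(window: np.ndarray, n_sigmas: float = 4.0) -> bool:
    """Flag windows whose energy sits far outside the healthy range."""
    return abs(rms(window) - mean) > n_sigmas * std

# The on-device loop would call looks_worn_out() on each new sensor window
# and only report back when it returns True.
print(looks_worn_out(np.random.normal(0.0, 5.0, 512)))  # noisy window: likely True
```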

(00:06:09):

One thing that has been really popular is with conservation researchers putting models onto cameras that get installed out, nailed to a tree in the middle of the rainforest. Whenever a particular type of interesting creature walks past, it takes a picture of that creature, and then reports back via a low bandwidth wireless connection that it saw that creature. So instead of having to have people fly out and collect memory cards from these devices, to see which animals walk past, you can get the information instantly.
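A minimal sketch of that camera-trap pattern, with the classifier and the radio link stubbed out as hypothetical stand-ins so the control flow is visible:

```python
import random

INTERESTING = {"cheetah", "jaguar", "tapir"}

def classify_frame(frame):
    """Stand-in for an on-device image classifier, e.g. a TFLite model."""
    return random.choice(["rabbit", "cheetah"]), random.random()

def send_uplink(payload: str) -> None:
    """Stand-in for a low-bandwidth radio link such as LoRa."""
    print("uplink:", payload)

def on_new_frame(frame) -> None:
    label, confidence = classify_frame(frame)
    # Send a few bytes (species + confidence) instead of a whole image.
    if label in INTERESTING and confidence > 0.8:
        send_uplink(f"{label}:{confidence:.2f}")

for _ in range(20):
    on_new_frame(frame=None)
```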

CW (00:06:44):

That is an excellent answer. And you have failed lightning round.

EW (00:06:49):

Well, I think he missed it because it is supposed to be short answers. And I think he really missed the part about how-

CW (00:06:55):

He brought it back to animals.

EW (00:06:57):

To animals, but I think he missed one particular-

DS (00:06:59):

Oh, I missed the crickets. Yeah.

EW (00:07:00):

So let me ask you a different question, still in lightning round. What are the biggest challenges facing the use of machine learning by crickets?

DS (00:07:09):

By crickets? So <laugh>, I think, really they are quite small, and do not have good access to technology. But if we can figure out that part <laugh>.

CW (00:07:26):

How are your crickets?

DS (00:07:28):

Well, luckily for me, I have not seen a cricket for years, at this point.

EW (00:07:34):

That is good.

DS (00:07:34):

Luckily for the crickets as well, I think.

CW (00:07:37):

<laugh>

EW (00:07:38):

Which is more important? Dataset or algorithm?

DS (00:07:42):

Oh, that is a- Actually no, it is definitely dataset.

CW (00:07:47):

Which is more important? The training set or the test set?

DS (00:07:51):

The test set.

EW (00:07:53):

Do you have a favorite fictional robot?

DS (00:07:55):

Oh, ah, there are too many, but either the robots from "Batteries Not Included"- Have you guys seen that?

CW (00:08:03):

I saw it as a young person, but I do not remember much of it.

DS (00:08:05):

Yeah, that was one of my favorite things, these cute little self-replicating robot creatures. Or maybe, Johnny 5.

CW (00:08:17):

Iberian ibex or hummingbird?

DS (00:08:20):

Ooh, that is a good question. I think I will have to pass on that one.

CW (00:08:28):

<laugh>

EW (00:08:30):

And, do you have a tip everyone should know?

DS (00:08:32):

Ooh. If you are writing a book, and you are trying to figure out how long it is going to take, add 50% to your estimate.

EW (00:08:42):

That is very conservative <laugh>.

DS (00:08:44):

Yeah <laugh>.

EW (00:08:45):

I mean, that is- wow. That is actually really good. For most people, it is two or four times <laugh>.

DS (00:08:51):

Oh yeah. I guess it is if you are writing your second book <laugh>.

CW (00:08:54):

Yes. <laugh>.

EW (00:08:56):

How was writing your second book different from writing your first?

DS (00:09:01):

It is very different, in that you know what the process looks like, and you know what is going to happen at each stage. But then that opens you up to a whole bunch of new risks, where you think you are going to be done at a certain point, by a certain time, but you have massively underestimated it, and end up having to work a lot of late nights. So I think it does not really help that much, but it is definitely less stressful to know upfront.

EW (00:09:36):

Agree. It would be less stressful to write another book, understanding how the art works. Because I was so stressed about that in the first version.

DS (00:09:44):

Oh yeah. Oh my God. Well, this time- Last time I spent loads of effort making the diagrams perfect. And then I found out that the publisher could have just done them for me, based on sketches. And this time I did the same thing again for some reason, because- Well, I liked the way they looked in the previous book. And then they ended up redoing them for me anyway. I guess I just do not learn.

EW (00:10:09):

So the book that is coming out in mid-December is "AI at the Edge," not "AI on the Edge," no matter how many times I say it wrong.

CW (00:10:18):

<laugh>

EW (00:10:22):

Could you give us the 10,000 foot view?

DS (00:10:27):

Yeah, absolutely. So I think as this technology matures and becomes more available as a tool in people's toolbox, there are all sorts of questions that come up around how you actually use it. What are the best practices for designing products that make use of edge AI? And "edge AI" itself is a new, or newish, term that needs to be defined. Then within that, no matter what your field, whether you are a machine learning expert or an embedded expert, there are going to be huge chunks of this field that you do not necessarily have direct personal experience with.

(00:11:07):

So what we try to do with this book is create a book that basically gives you an understanding of everything that you need to know, if you are going to be building a product in this field, or working adjacent to this field. It does not necessarily teach you everything you need to know, because that would be an encyclopedia-sized book, but it at least gives you an understanding of what all the pieces are, and maybe even who you need to hire or who you need to work with, to make these kinds of projects happen.

EW (00:11:38):

Where the TinyML book was pretty tactical, pretty hands-on, with code on most pages, in the first third of this book there is no code.

DS (00:11:49):

Yeah, it is really designed to be workflow oriented. The goal is to give you first the background you need to know about all the moving parts, what they are, how to think about them. And then to go through the workflow end-to-end of working through an idea, understanding whether that idea makes sense for this type of technology, and if it does, what particular pieces of this technology are useful for it. And then going through this iterative workflow of designing something, training models, testing them out, understanding how they work in realistic scenarios, until you have got something approaching a product.

(00:12:32):

The other really important thing there, is looking at responsible AI as you go through all of this. Obviously when you are designing any product, you need to think about ethics and responsibility. But with AI and machine learning, you really need to, because it is a core part of the workflow. So the book tries to weave that in through every stage, also.

EW (00:12:56):

One of the solutions that seemed to be presented in the book, was basically hire an ethicist, who can give you a better insight into how ethics and AI go together. Is that the only option?

DS (00:13:17):

I do not think it is the only option, but realistically, we are talking about building products that are supposed to scale, right? To take a step back, I would say the whole point of AI is to take human insight and multiply it by factors that we could not achieve with people. In that process, we are also making it cheaper to apply human insight in different situations.

(00:13:43):

But we are also decoupling the complete human insight from this little piece, that we are going to go and scale up. So any situation where we are doing that, it is really important to make sure that it is done in a way which does not cause harm. Because, by taking the person out of the equation, you are introducing a risk that the system is going to do something, that a person would not do in that situation.

(00:14:13):

A good example of this, that we use in the book, is the idea of a driverless car. It is incredibly important to get that right, if you are going to try and do it, because the risk is that your system is going to potentially put human lives at risk. In that light, with the seriousness of the situation, I would hope that any organization that is working with machine learning or AI, in a context where it could put people's lives at risk potentially, would see it in their budget to hire somebody, who can really understand the implications of that, from a harm reduction and ethics perspective.

(00:15:00):

Obviously smaller, more trivial things, that have less of an impact on the world, maybe you are making a toy, that is designed for bored engineers to play with on their desk, maybe you do not need to worry so much about this type of thing. But then in that same example, imagine you are building a children's toy, and you have not fully thought through the ethical implications of what this AI powered feature on this toy is doing. That really needs to be part of your work.

(00:15:32):

A good way to think about this is, that you really need to know what you are doing, right? Part of knowing what you are doing, is making sure that your team contains domain experts, who are guiding the development of the product. And typically one of these domain experts, should be somebody who understands the ethical and responsible elements of this work that you are doing.

EW (00:16:02):

In that chapter, which I think was chapter five, you talk about building a team for AI at the edge. You had a team that, forget the AI part, I want that whole team anyway. I want somebody thinking about the ethics. I want the product manager thinking about the customer's view, as well as, when are we going to ship? I want the tester. I want the engineers that can review things. I want CEOs who actually understand how the market is going to work.

(00:16:35):

I wanted the team and it was not related to AI. Why did you feel that that was an important part of the book, instead of just saying, "Make a reasonable engineering team, and stop trying to do everything yourself"?

DS (00:16:51):

Yeah, that is a good question. I think that we all come from different perspectives, as engineers or product people or business people, or whatever role that we might have in organizations. Depending on our own history, we might not be aware of all of these different pieces. I certainly was not aware of all the different parts that come into building, for example, an embedded project, until I started getting deeper into the world of working with embedded engineers. So I think it is important to have a baseline of, "What does a good team look like?"

(00:17:29):

One of the real things there to think about is obviously the tools that exist, to create a company and create a product, are getting more and more accessible over time. So it is easier and easier to make things and put them out in the world, and have them interacting with people.

(00:17:53):

It is easy to do that, but the knowledge about how to construct a team to do that well, and do that safely, does not come for free with the tools. That comes through experience and insight, and all of that good career stuff. What we really do not want is for people who have an interesting idea, but do not necessarily think through all of the implications, to go out there and try to build it just because the tools are accessible, and then either fail because they missed some important stuff that they needed to be aware of, or cause harm in the world by doing something that was ill-advised.

(00:18:36):

So even just from a selfish point of view, if you are trying to launch a product and you think you have got a great idea, but you do not know anything about one particular portion of the process of developing a product, you have got to figure that out somehow. Hopefully by covering that in the book, we at least point you in the right direction. Like I was saying, it is not an exhaustive resource for how to hire a good program manager or something like that, but at least if you did not know that program managers exist, you now know there is a name for them. And you can go out and hire one if they do the kind of thing that you need on your team.

EW (00:19:16):

When you say tools, you mean the AI tools? And when you say tools, I hear compilers, because all of that is true with just someone with the compiler. It does not have to be AI, but-

DS (00:19:29):

Yeah, absolutely.

EW (00:19:30):

You do have so much definitional- Is that a word?

DS (00:19:36):

Sure.

EW (00:19:39):

Definitional information in the book about sensors, which is all the embedded parts, as well as algorithms to deal with sensors. And then there is the AI, different methods, neural nets, and even heuristics you were calling AI. How do you- What is AI? Is it always- I guess these days it is mostly machine learning for me, but what do you define it as?

DS (00:20:10):

Yeah, it is an interesting question, and there is so much that it is almost like a philosophical question. Well it is not almost, it kind of is. You can go down to like, "Okay, what is intelligence?" And when you read about what people who are at the cutting edge of answering this kind of question say, there is not a clear answer. Everybody has a lot of differences.

(00:20:33):

But to define this for the book, as a really basic way of getting at what artificial intelligence is, I tried to answer, "What is intelligence?" I think intelligence is knowing the right thing to do at the right time. That is a very trivial definition. It is not going to stand up to someone who is a philosopher. But in terms of when we are talking about intelligence in engineering products that do things, if we want to add intelligence to a product, it usually means we want the product to be better at knowing the right thing to do at the right time.

(00:21:13):

So artificial intelligence means knowing the right thing to do at the right time, without a human being involved. I quite like that little definition because it is tidy, it sidesteps all of the stuff about computers being sentient, and things like that, and just lets us focus on the basics.

(00:21:32):

Which is that, if I am creating a thermostat for my house, that learns the best time to turn the temperature up and down, that is not necessarily going to involve deep learning. It is not necessarily going to involve some really fancy code. But I would argue that it is AI, because we are using a computer to do something which we as humans would think is kind of intelligent.

(00:21:57):

So the goal with the book, right, is that we want people to build good products. We do not necessarily want to direct people, and funnel them to like, "Hey, you have got this idea. You should immediately train a really fancy deep learning model to try and implement it." Because quite often, in fact the majority of the time, we find as practitioners in this field, you can get some pretty good results using heuristics and using rule-based logic.

(00:22:28):

If you are a machine learning engineer, and you have just had an amazing idea for a gadget you want to build, maybe your first instinct is going to be to go and train this fancy model. But what we want you to do, is take a step back. And have a look at what is the most basic thing that you could call intelligent, and that someone would pay money for because it makes their life easier, because maybe that is the place to start.
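To make that concrete with the thermostat example from a moment ago, here is a minimal rule-based sketch. Every number in it is invented, and the "model" is nothing more than an average, which is exactly the point:

```python
from statistics import mean

# Hour of day at which the user manually turned the heat up, one per day.
observed_wake_ups = [6.5, 7.0, 6.75, 7.25, 6.5]

def preheat_hour(history: list[float], lead_time: float = 0.5) -> float:
    """Heuristic 'model': preheat half an hour before the usual wake-up."""
    return mean(history) - lead_time

print(f"Turn the heat on at {preheat_hour(observed_wake_ups):.2f} hours")
```

If a baseline like this already makes the product feel smart, a deep learning model may never need to enter the picture.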

EW (00:22:53):

Did you have a particular reader in mind? For me, it helped me to have actual people I was thinking about, so I could think about what their skills and their background were, and how much math they needed. Did you have somebody like that?

DS (00:23:06):

Yeah, we wanted to be as broad as possible, because Jenny and I- Jenny Plunkett is my co-author. We have done a lot of work through Edge Impulse, and through other jobs, working with people who want to apply AI to solve problems with embedded systems. What we have found is there is not really a typical person, who is the person directing this kind of work.

(00:23:34):

We have talked to everyone from hardcore embedded engineers, through to machine learning engineers. People who have written books about either of those things, but do not know anything about the other one. All the way through to people who are totally business focused or product focused.

(00:23:51):

What we saw is that it does not matter what someone's background is, there are certain things that they need to know, in order to think rationally about this type of space, and know how to solve problems with it. So we try to make something that would be a useful reference, and provide some good insight to people who are building in this space, whether or not they are highly technical or business focused or so on.

EW (00:24:25):

How much math do people need to use AI techniques? Is it enough to be able just to use them, even if you do not understand the gradient descent, and all of the mathy bits?

DS (00:24:39):

Yeah. I would be the first to admit that I am not a good mathematician. But I think it is okay. There is huge value to knowing and understanding some basic maths concepts, when you are working with machine learning. For example, to understand how models are performing, there are a few really important evaluation metrics, which basically give us numbers that describe whether a model is doing its job well at maybe, telling you whether something is a lion or a rat that is walking past in the jungle. So having some insight into mathematics is really helpful with interpreting those types of numbers, and understanding what they mean, and how they apply to your situation.
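As a concrete illustration, here is a minimal sketch of three of those evaluation metrics, precision, recall, and F1, computed for the lion-or-rat example with made-up labels:

```python
actual    = ["lion", "rat", "lion", "rat", "rat", "lion", "rat", "rat"]
predicted = ["lion", "rat", "rat",  "rat", "lion", "lion", "rat", "rat"]

tp = sum(a == "lion" and p == "lion" for a, p in zip(actual, predicted))
fp = sum(a == "rat"  and p == "lion" for a, p in zip(actual, predicted))
fn = sum(a == "lion" and p == "rat"  for a, p in zip(actual, predicted))

precision = tp / (tp + fp)  # of the lion alerts, how many were real lions?
recall    = tp / (tp + fn)  # of the real lions, how many did we catch?
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```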

(00:25:30):

Also, it is helpful to understand a bit about things like probabilities, because that is something that is inherent to a lot of these types of technologies. If we are doing signal processing as well, that can be really valuable. So if you are a signal processing expert- I think people who are DSP experts are so well suited to working with embedded machine learning, really. They are the natural users of this type of technology, so that can be a real advantage.

(00:26:02):

But there are really cool tools out there, like Edge Impulse, that make that stuff easier, so that you can just plug in pre-created signal processing algorithms, and use those to interpret your signals and feed into models.

(00:26:18):

So it is useful. There are big advantages to learning some mathematical concepts, and getting better at understanding that type of stuff, but it is definitely not a barrier to entry. I would say, a typical embedded engineer is absolutely mathematical enough, to be able to work with this technology.

EW (00:26:41):

You mentioned Edge Impulse, which is the company you work for? Did you found it?

DS (00:26:48):

No, unfortunately not. I was beaten to it by the actual founders. Zach Shelby and Jan Jongboom, who were working at ARM at the time, realized that there was this need for much easier tooling to help people build solutions with this type of technology. I was coming to that same realization at Google, writing this book with Pete that ended up being really, really long, and so-

EW (00:27:19):

The TinyML book, not the current book.

DS (00:27:20):

Yeah, the TinyML book. Exactly. I think we were all in the field at the time, realizing that there are different ways to go about this, but we need to make it easier for people to use this technology.

CW (00:27:35):

Do you feel- I think you have touched on this, but there is a tension between making things easier and the ethical angle. You said, "If you make things easier, then people are set up to fail." But I also feel like people could be set up to use it irresponsibly, without knowing what they are doing. Is there a place for- At the risk of solving social problems with technology, is there a place for making the tools in some way easier to evaluate ethical questions or something? I...

DS (00:28:13):

Absolutely. I think that one of the things that has motivated a lot of people who have entered this field, is it is almost a chance to start again, and reset and do things better this time around when it comes to AI, and responsible AI and ethics. Because we are solving a whole different bunch of problems, with different types of data.

(00:28:38):

We have got the opportunity to build tools that have some of this stuff baked in. For example, one thing that is really important in building AI that works responsibly, is making sure that your data set contains data that reflects the type of diversity that is there in the real world. It is not just the kind of diversity we are talking about with human beings, and races and ethnicities and genders and things like that. But diversity in terms of there is a lot of difference and heterogeneity in the real world. Any data set, used to train a system that is going to be working in the real world, needs to reflect that.

(00:29:22):

So we can think about things like, what are the tools that we need to build that make it more obvious to people, when their data sets do not reflect that type of heterogeneity. If we can look at your data set and we see, "Oh, there are these types of things and these types of things. And there seem to be these types of things as well. But for those ones, you do not have very many samples. Have you thought about going out and collecting more?" Giving people that type of input and guidance, can be really valuable in helping people avoid making these sorts of mistakes.
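A minimal sketch of that kind of dataset check, with invented class names and an arbitrary 5% threshold:

```python
from collections import Counter

labels = ["machine_ok"] * 900 + ["worn_bearing"] * 80 + ["loose_mount"] * 5

counts = Counter(labels)
threshold = 0.05 * len(labels)  # flag anything under 5% of the data

for cls, n in counts.items():
    if n < threshold:
        print(f"'{cls}' has only {n} samples. "
              "Have you thought about collecting more?")
```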

(00:29:57):

But what it cannot do is stop people doing bad things on purpose, building applications that are just harmful by nature. However, we can do things like this: our Edge Impulse software user license is something called a "Responsible AI License." If you search for "Responsible AI License" online, you can find it. We expressly prohibit people from using our platform or our technologies for building certain types of applications, for example in weapons and surveillance. So there are things that you can do even on that level, license-wise, to limit what people are going to do with your products. I think it is really important to think about that from the beginning.

EW (00:30:47):

In the first third of your book, there was very little mention of Edge Impulse, and in the last third, there was quite a lot, as it went through specific examples showing the software on the page, on the screen. Was it hard keeping Edge Impulse out of every page? Sort of the, "And we can help you with that!" sort of...

CW (00:31:11):

<laugh>

EW (00:31:12):

"I have a solution to many of these problems. Stop reading here, and go play with it."

DS (00:31:18):

Yeah, it was a really conscious decision we made, actually right at the beginning, to avoid making the book tied to a specific tool. That is why we chose to separate it: the vast majority of the content is general and universal, and then we have these specific practical chapters at the end, which are supposed to demonstrate the principles we went through in the rest of the book, but using a particular tool, which happens to be the tool that Jenny and I have helped build.

(00:31:51):

The reason we did that was because it is horrible when tech books come out, and then six months later they are out of date. It is inevitable to some degree, but a lot of the stuff to do with the machine learning workflow, and how to work with AI responsibly, and how to build good products is general. It is going to change a lot more slowly, than the stuff that relates to any individual tool.

(00:32:17):

If you pick up this book a year from now or two years from now, the majority of that first part of the book is going to still apply. Maybe the latter parts might need updating. But that is okay, because they are only for demonstration purposes, really.

EW (00:32:38):

How much of the book was part of your job?

DS (00:32:43):

I wanted to make sure that it was all doable within spare bits and pieces of time, while I was working. So the original goal was to spend an hour or a few hours on it a week, over an extended period of time. It really ended up being a lot more work than expected <laugh>. So I spent vastly more of my spare time on the book, and not that much work time in the end. Same with Jenny.

(00:33:13):

But I think we are really, really fortunate to have an employer that is willing to support us on a project like this. I am super grateful to the company, for letting us use any of our work time on building it. It has been really cool.

EW (00:33:31):

We have not really talked about the "edge" part of this, because the show is about embedded systems, or at least that is what they tell me. We understand, we are making smarter systems all the time, our processors get more powerful. But there are reasons to put more intelligence at the edge, and there are reasons not to. You have an acronym that talks about why you would want to put things at the edge. Could you go through it?

DS (00:34:01):

Yeah, absolutely. And I would love it if I could take credit for this acronym, but actually it was an invention of Jeff Bier from the "Edge AI and Vision Alliance," and it is absolutely brilliant. Because it is a stupid sounding word, BLERP, and the letters of the acronym are B, L, E, R, and P. Each of those gives an argument for situations where edge AI makes sense. So I can go through those, if you want.

EW (00:34:39):

Oh, the first is bandwidth. And that totally makes sense to me. I mean, you were talking about animal pictures. You do not want to send every picture up. You want to say, "Oh, I saw 23 rabbits and one cheetah."

DS (00:34:56):

Exactly.

EW (00:34:57):

And then maybe you get the picture of the cheetah, because that is more interesting than your 23 rabbits.

DS (00:35:02):

Exactly.

EW (00:35:05):

Latency is the second one, right?

DS (00:35:08):

Mm-hmm <affirmative>.

EW (00:35:09):

And that is when you do not want to have to talk to the great internet. You want to give a response quicker. So when I have a pedometer or a smart watch, I do not want it to have to send all of my data up to the internet, and then get a response. If I took a step, I want it to say, "I took a step."

DS (00:35:29):

Exactly.

EW (00:35:31):

The next one is economics. I did not get that one as well. Could you explain it? Because to me it is more expensive to put intelligence on the edges.

DS (00:35:40):

Yeah, it is really interesting. Economics- The one thing to remember is, not all of these need to apply for a given application to make sense. So you probably need at least one of them, if it is a really strong one, maybe a couple. Some of them come together quite nicely, like bandwidth and privacy for example, which we will go into later. But, yeah, you do not have to have all of these in every project.

(00:36:08):

So there will be some places where economics could go one way or the other. But potentially, if you are sending data from the edge, that can be very expensive. Whether you are having to ship a device that has 3G connectivity on it to send up video, and you have got to pay per kilobyte that you are sending.

(00:36:34):

Or maybe if you are doing the sneakernet thing, and you have a device out in the middle of the rainforest taking pictures, and you have to fly someone out there every six months, pick up the memory card, all of that stuff costs money. By reducing the amount of data that needs to be sent around, by making some of these decisions on the edge, you can save costs.

(00:36:55):

Also, it potentially can reduce costs on your server side processing. So for example, if you are trying to spot keywords in a bit of audio, you can either do that on device, or you can do it on the server. If you are doing it on the server, you have got to then provide a server, and keep it alive for the entire time that your product is being used. If your company goes away, then maybe your product will stop working. Whereas if you are doing that stuff on the edge, you no longer need a server, you no longer need the people required to keep a server running, you do not need to keep it up to date, you do not need to have this security liability. So that is where it can come in, as well.

EW (00:37:48):

That also comes into the next one, which is reliability. That it can be more reliable to have smart things on the edge, if you do not have a reliable connection or a reliable company. Are there other ways that reliability plays into it?

DS (00:38:10):

Yeah, absolutely. If you have got any kind of system that is dependent on a connection somewhere else, that is just another point of failure. And it involves a whole chain of complexity, that goes from the device all the way back to the far side. It is always good to have less complexity. There is complexity involved with machine learning and AI, but it is a different type of complexity, and sometimes it is worth trading one for the other.

(00:38:42):

So potentially maybe it is better to have this model running on the device, versus depending on there being a reliable wireless connection. A good example of that would be a self-driving car. Imagine you are driving through an intersection, and suddenly your self-driving system loses its connection and cuts out, and you are sitting in the middle of the intersection with cars streaming around you. And that is the least bad thing that could happen. Whereas if all of the AI stuff is happening in the car, that is much less likely to happen.

EW (00:39:17):

And then the last one was privacy, which we kind of touched on with bandwidth. But mostly I do not want you sending all of my data across the internet, because I do not trust that it will be private forever.

DS (00:39:28):

Exactly. This is my favorite one, I think, because beyond all of the others, privacy unlocks things that were just not possible before. You can brute force all of the others. Bandwidth, for example: you can always throw money at something and give it more bandwidth. Latency, same kind of thing: if you move your server close to the edge, even if it is not exactly at the edge, you can reduce latency. All of these things are things that you can throw money at and engineer around.

(00:40:03):

But privacy, if something is a privacy concern, it just is a privacy concern. If there are privacy concerns with having certain types of sensors capturing data in certain places and sending it elsewhere, then that problem is not going to go away by throwing money at it.

(00:40:24):

For example, having security cameras in your house. I personally would not feel comfortable having a camera in my home, that is streaming video and audio to some remote server. I do not care how cheap it is, I do not care how supposedly reliable this company is, it is just a line I do not really want to cross. Whereas if you have got a little model that is running fully on device, doing whatever it is that is supposed to be done by that device, and you have a guarantee that no data will ever leave the device, I do not care whether the camera is there.

(00:41:01):

So that opens up things like maybe I can have the lights shut off automatically when I leave the room. I do not really want that to happen by broadcasting video of my bedroom to the internet. But I am quite happy to have a little addition to my light switch, which has a camera built in, but has no way for data to leave the sensor.

EW (00:41:26):

Going back to reliability, and we touched on this before, AI does not always work. Safety critical systems like cars are definitely part of it, and that is a big thing. But one of the worst parts about AI at the edge, locally, for me, is that it works until it does not. So it feels unreliable. It ends up being more frustrating than if it had never worked. Like, we have a garage door opener that 90% of the time opens when I tell it to. The other 10% of the time, I have to hop around and do some sort of dance in the driveway, in order to get it to work.

CW (00:42:09):

<laugh> It is the 95% that is worse than if it just did not work at all.

DS (00:42:14):

Exactly.

EW (00:42:15):

Yeah, if it did not work at all, I would carry the button.

CW (00:42:20):

95 sounds good at first, but that is one out of every 20 times it does not work. That is a lot.

DS (00:42:26):

Unbelievably frustrating. Yeah. And I am sure everybody has had this sort of effect with voice assistants, which I have done a lot of work on in the past. Where you are able to get it to work- I would say 95% would be a bit generous. But most of the time it will understand what you are doing. But when it does not, it is embarrassing, because you are literally speaking, and then something does not understand you and does something else. As human beings, that is inherently a horrible feeling. So yeah, totally see that.

EW (00:43:03):

As an engineer, I totally understand that the words "on" and "off" are essentially the same word.

DS (00:43:09):

<laugh> Exactly.

EW (00:43:12):

But as a human, there are some important differences <laugh>. As we work on the edge, we have constrained models. This goes back to the economics of small devices, and making sure they are low power. So we have to be constrained. That is a normal embedded systems thing. But balancing that trust and reliability against...

CW (00:43:39):

Privacy.

EW (00:43:40):

Well, I was thinking more against the economics of constrained models on devices.

CW (00:43:44):

Sure.

EW (00:43:44):

I understand that there is a trade-off there, as an engineer. But as a user, that trade-off has always been wrong. 95 is not good enough.

DS (00:43:58):

Yeah. For me, this is where all of this extra stuff around who is on your team really comes into play, because you need to have a team building the product that you are building, that has insight into this type of stuff. What would be acceptable to have 95% accuracy for? What would be frustrating to users?

(00:44:20):

There is no need for these things to be particularly accurate sometimes. For example, if I have a situation where I have got to identify whether a machine is about to break down, because if it breaks down it costs me a hundred thousand dollars to fix it. If I can improve my ability to understand when it is about to break down, even by 10%, that is going to save me so much money that it becomes a no-brainer.
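Back-of-the-envelope, that trade-off might look like the sketch below. Every figure here is invented for illustration:

```python
cost_per_breakdown = 100_000   # dollars to repair a machine that fails
early_fix_cost = 10_000        # dollars to service a machine flagged early
breakdowns_per_year = 12
catch_rate_today = 0.60        # fraction caught early by manual checks
catch_rate_with_model = 0.70   # ten points better with the model

def annual_cost(catch_rate: float) -> float:
    caught = breakdowns_per_year * catch_rate
    missed = breakdowns_per_year - caught
    return caught * early_fix_cost + missed * cost_per_breakdown

savings = annual_cost(catch_rate_today) - annual_cost(catch_rate_with_model)
print(f"Expected annual savings: ${savings:,.0f}")  # $108,000 on these numbers
```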

(00:44:55):

Whereas if I am in a different situation, where I am trying to let someone into their house using face recognition, and 10% of the time it is letting the wrong person in who should not be allowed in, that is a totally different kettle of fish.

(00:45:09):

So having a good understanding of the situation that you are trying to work within, really from every angle, and how your solution is going to impact it, is absolutely critical. You need to build your projects in a way that builds in this sort of go or no-go decision at various points, and try to get to it as early as possible. So you need to find out very early on whether your product is going to work effectively. You need to get that figured out at the model stage, before you have spent loads of time building hardware for it, for example, or putting stuff in packages and getting it onto store shelves. I think it is a big challenge.

(00:46:00):

On the other hand, a lot of the places where we are looking at deploying AI to the edge, first of all, it gives us another signal that we can use, in conjunction with existing signals, to potentially make better decisions.

(00:46:15):

For example, you mentioned your garage door opener. Maybe we could create a system that used a camera to look at whether your car was present, and gates whether the door opens based on whether it looks like your car is present, and whether the radio transmission is happening. That might allow us to reduce the requirements for the radio signal to be as strong, because we also have this other signal. So maybe it would start feeling more reliable for you. That is kind of a dumb example, but using these things together, multiplying the number of signals you have, can lead to more reliable products.
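As a sketch of that gating idea, with both detectors stubbed out as hypothetical stand-ins:

```python
def radio_signal_strength() -> float:
    """Stand-in for the measured quality of the remote's radio signal."""
    return 0.55

def car_present_confidence() -> float:
    """Stand-in for an on-device camera model's 'my car is here' score."""
    return 0.90

def should_open_door() -> bool:
    radio = radio_signal_strength()
    car = car_present_confidence()
    # A strong radio signal alone is enough; a weaker one is accepted
    # when the camera is confident it sees the right car.
    return radio > 0.8 or (radio > 0.4 and car > 0.75)

print("open" if should_open_door() else "stay closed")
```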

(00:47:03):

A good example of that is a case I mentioned with the speech recognition stuff. A big challenge there is, "How do you know whether somebody is talking to the digital assistant, versus just talking to someone else?" So one idea that has been thrown around, is putting a camera on the smart speaker, and looking at whether somebody is looking at the camera or not, when they are saying a command. By doing that, you are able to be a bit more sure that the person is talking to the device. So you can adjust all of your thresholds and calibration, to be more likely to pick things up and less likely to miss things. With edge AI, you can do that on the device, whereas previously you would not want to have streaming video going up into the cloud.

EW (00:47:56):

That example makes sense, but my brain keeps saying, "That is just sensor fusion. We have been doing that for 20 years. It is nothing new."

DS (00:48:03):

Oh, totally. That is what I really would like to get across to people actually, is that this technology is not- It is something new in some ways, but more it is just another tool in your toolbox, that you can deploy alongside the tools that you are already using, in order to get better results in some situations.

(00:48:24):

So there is some cool fancy new stuff we can only do with this type of technology, but a lot of the time I think it just fits in as another tool, that an embedded engineer might deploy. That is the way I want people to see it. Not something fancy that requires all this extra, but actually just something useful that you can rely on in some situations.

CW (00:48:48):

I want to shift gears just a little bit, because I have got you here, and you are a machine learning expert. Elecia and I have both worked on machine learning projects for consulting. So we have a passing familiarity with most of the mathematical concepts, and how models work, and how they are tested and developed and stuff. But those have been mostly classification kinds of problem solving. Very straightforward, typical vision things.

(00:49:14):

As I have done more work with machine learning, as I have played with some of the things that come out of OpenAI, I am slowly turning into a Luddite. And last night I think I said I was going to burn all the computers in the house...

DS (00:49:30):

 <laugh>

EW (00:49:30):

As witches.

CW (00:49:31):

Because I got to play with...

EW (00:49:33):

ChatGPT.

CW (00:49:34):

ChatGPT-

EW (00:49:34):

I was definitely asking about.

CW (00:49:36):

Which is extremely fun to play with.

EW (00:49:38):

Extremely fun.

CW (00:49:39):

Totally wrong about 90% of the things it talks about.

DS (00:49:43):

 <laugh>

CW (00:49:43):

Dangerous, but also extremely fun to play with. So we are in this place where the public facing view of AI, as...

EW (00:49:54):

Like the digital assistants.

CW (00:49:54):

Look at these fun tools, that are heading towards C-3PO, or can make art for us. How are you feeling about all this stuff?

EW (00:50:04):

<laugh> I forgot to ask him when he thought the singularity was going to happen. <laugh>

DS (00:50:08):

Oh, yeah. This is a whole thing. Honestly, my whole life I have been absolutely fascinated by artificial intelligence. But also the big questions around consciousness and sentience and experience, and that whole sort of stack there.

(00:50:28):

But I just do not think about that at work, because there is a lot of excitement in these kinds of innovations, especially around large language models and image generation and stuff. And it is really fun to play with. It is cool. It will have important applications. But what really needs to happen for it to actually make any kind of difference to anybody, is for there to be tooling that can be used to make it actually useful. I think it is going to take a while before we really get there.

(00:51:07):

Most of the time with AI- Well, I have worked on this type of stuff for my entire career, really. One of the companies I worked at about 12 years ago, we were doing artificially intelligent call center agents, which used NLP. It was all done using rule-based systems back then. We had the problem that our system was too smart, and it would sort of ad-lib things.

EW (00:51:38):

 <laugh>

DS (00:51:39):

If you have got a call center agent, that is supposed to be like helping you book a theater ticket or get your delivery delivered in the right time slot, and then it starts just interjecting with unrelated topics that it thought that you were talking about, that is absolutely unacceptable from the point of view of the business that is trying to use this as a tool. So I think providing these kind of constraints is going to be the biggest challenge, in making this type of technology useful.

(00:52:05):

Until then, I am not really that worried about it, right? I think that in order for someone to use something for good or for bad, it needs to be effective. These types of things, we have not got to the point yet that they are actually effective for anything beyond playing. And there is nothing wrong with that. There are loads of cool things that exist only for entertainment, and for diverting humans from our own existential fear, for a brief time.

(00:52:38):

So if you can have a chat with the OpenAI models- There is actually an amazing thing called "AI Dungeon," which is-

CW (00:52:46):

Yes, seen it!

DS (00:52:48):

Oh, it is so crazy. It is like a roleplaying text-based adventure game, where it will generate the story as you go, and you can have conversations with the characters, and it is just mind-blowing. You could not put that in a corporate product interacting with the public, because it is going to say all kinds of weird unacceptable stuff. But if it is in a video game for adults to play with, that is fine.

EW (00:53:16):

I think I disagree with you. I remember my mom talking about computers and them getting into homes, and how so many people thought that they were going to come in through data analysis, like anybody does that at home. Or there was that giant computer that Neiman Marcus sold for recipes.

DS (00:53:41):

Oh, right. <laugh>

EW (00:53:43):

But the way that computers came into our homes was through video games, it was through play. With the image generators, I used them for playing for a while, but now I use them to create thumbnails for blog posts or for shows. Not because- These are not things I would pay an artist to do. Before, I would just use random pictures, but now they are slightly more relevant, and they are kind of fun. Maybe I have to go through a dozen versions before I find one I really like, but that process is fun.

(00:54:23):

And the ChatGPT that Chris and I have had a lot of fun with, anything having to do with screenplays for Star Wars and yogurt was just genius.

CW (00:54:36):

<laugh>

EW (00:54:37):

But I-

DS (00:54:39):

But you can also ask it to write a book report, and I could see a kid turning that in and getting a passing grade sometimes.

EW (00:54:46):

Well, more ethically-

DS (00:54:48):

No. Okay. Sorry. Yes.

CW (00:54:49):

<laugh>

EW (00:54:51):

Last week I had a colleague who was very frustrated with the client and we talked. It was useful in the end to him, because he was too close to it. It was easy for me to come at it from the outside, and say something calm that was exactly what he needed. He could have just told the GPTchat or the ChatGPT and it would- I mean, I asked it for a form letter for someone. I do not remember...

CW (00:55:16):

It was someone who is a non paying client.

EW (00:55:17):

A non paying client, which is a letter I hate to write. You would think I would just save it, but now I could actually put in a few extra details, and it would write me a letter that I did not- I am checking it. It is not that I am just sending it; I am using it as a template, a starting place for something more useful. And, I guess, we have been having a lot of fun with it. That is actually going to be today's quote at the end of the show, what I got from GPTchat, ChatGPT.

DS (00:55:53):

Oh, awesome.

EW (00:55:54):

But I do not agree. Like I had the problem earlier where if it is not reliable, it is not useful. But I think there may be another path for some of these other things that are not business focused. They are more human focused. Solving a problem I did not realize I had, and now you can solve it more easily, in a fun way.

DS (00:56:20):

So that is where I totally agree with you, and I think the key thing for me in all of those examples, is there is a human in the loop. These are being used as recreational experiences, or creative tools, or a helpful sounding board for something. That is amazing, and I think there is huge potential there.

(00:56:45):

But the thing that makes it useful in all those cases, is that you are gating what it does. So you are creating images to go on a blog post. You get to pick which image goes on there. Or you are helping yourself write a letter, and you get to look at the letter and edit it before you send it.

(00:57:08):

So it scales in one way, because lots of people can use that creative tool at the same time with very little cost, versus manufacturing a widget that does it. But it does not scale in the scary multiplicative way that worries people about the takeover of robots, for example. Because it is not automatically sending emails with no oversight. For every email that gets written with GPT-3, there is somebody reading that email and editing it first. Right now, if you decide not to go down that route and not edit the email first, you are going to end up causing a lot of problems for yourself.

EW (00:57:53):

That is on you <laugh>.

DS (00:57:53):

Exactly. So I think it is inevitable we are going to come up with tools for better controlling and mediating the outputs of these things, until they become useful in a scaling sort of way. But we are not there yet. And I have been working on this stuff for basically my entire career, and I have not seen us make any progress whatsoever. <laugh> So I am not holding my breath.

(00:58:25):

That said, if I buy an R2-D2 to live in my house, and his job is just to poke around in my house and be my friend, I do not need him to be nice all the time and say the exact right thing and be on brand. I just need him to be my friend. And that is the same as my cat, which regularly attacks and bites me.

CW (00:58:49):

<laugh>

DS (00:58:52):

So as long as we are not putting harmful content out there, by building irresponsible products as organizations. You do not want the R2-D2 to be attacking your child, or whispering horrible things into your ear while you are sleeping. But as long as we can be in control of technology ourselves, I think it is really exciting. I would love to have a little robot friend that I can chat with. But maybe I would like to have one that I created, rather than one that somebody else built and is going to like secretly market things to me in the night.

EW (00:59:35):

That is worrisome. As somebody who likes to try out conversations before I have them, likes to get scripts for conversations, I imagine this will be very useful to me. I do not do well when things are super awkward. So I can tell it, "Play the role of someone who is angry." And then I can go back and forth. I can really see that being useful. I do not want to say for developing my communication skills, but for decreasing my worry with respect to some communication difficulties.

DS (01:00:18):

I love that idea. Like imagine you have got to give a talk, and you can give your talk to a system, which then asks pointed questions at the end, so that you can add the stuff that you missed.

EW (01:00:31):

Yes. I made it talk about curved crease origami, which is something I love to talk about, and nobody really will talk to me about it. So it was really fun to have it. We talked about topology and differential geometry. It gave me all kinds of ideas...

CW (01:00:49):

But some of them were probably garbage.

EW (01:00:52):

Oh. Oh. And the code gave me was total garbage.

CW (01:00:55):

<laugh>

DS (01:00:56):

See, that is the thing. That is reassuring.

CW (01:00:58):

All right. I am in agreement with both of you, but also disagreement.

EW (01:01:02):

<laugh>

CW (01:01:02):

Because this stuff worries me. I am not worried about- My worry, to be sure, is not AI becoming sentient, taking over the world robots. I am worried about the interaction of humans and this stuff, and where that leads. I am just not sure. It makes me weirdly uncomfortable while also enjoying it, which is a strange place to be. And maybe that is the place all of us should be with technology.

DS (01:01:25):

Yeah. I think people have felt the same way, really, at least for my entire life, about technology. Talking about those favorite fictional robots- They are all from movies warning about technology, made so long ago that they look quaint now. And none of those fears have come to pass. The problem is that much worse fears, ones we did not foresee, have come to pass.

CW (01:01:50):

Yeah. That is what bugs me, is where is this going to go? And right now, ChatGPT, it is fun, but it is also a really good BS artist.

EW (01:01:59):

Oh my goodness.

CW (01:02:00):

It is a finely tuned BS machine. I see things when that thing talks, that I said when I was trying to explain to a teacher about a book I had not read.

EW (01:02:09):

<laugh>

CW (01:02:09):

It is like, "Oh, oh, I get it." And so that is very interesting to me. But also gives me some trepidation. So it is an interesting time <laugh>.

DS (01:02:20):

Yeah. If you think about it, these things are trained on a corpus of data from the internet. Which is basically a giant group of people talking about things they are not qualified to discuss.

CW (01:02:31):

<laugh>

EW (01:02:36):

<laugh>

DS (01:02:36):

<laugh>

EW (01:02:36):

Well, Daniel, it has been really great to have you. Do you have any thoughts you would like to leave us with?

DS (01:02:43):

Yeah, sure. So I think, if you are listening to this podcast, and you are interested in this kind of thing at all, I really recommend just getting your hands dirty and trying some stuff out. With tools like Edge Impulse, or whatever kind of things you have at your disposal to play with, you would probably be surprised by how much lower the barriers to entry have gotten. What I am really keen on is that as many people as possible who actually know about things in the world can use this type of technology to help solve problems, the kind of problems that they were looking to solve already. So if you take anything away from what I have said here, it would be, "Go and give this stuff a try. It is really fun and exciting and interesting. And there are some good resources out there at this point, which can help you build real stuff."

EW (01:03:41):

And Edge Impulse does have free tutorials. People can try this stuff out on your website.

DS (01:03:47):

Yeah, absolutely. And also the book, if you want to check that out, you can head over to O'Reilly. They have this online learning platform thing. If you sign up, I think you get a 30-day free trial. So you can read the book for free, basically.

EW (01:04:06):

And then cancel.

DS (01:04:07):

And then <laugh>.

CW (01:04:08):

No. Do not tell them that.

(01:04:09):

And then buy a copy <laugh>.

(01:04:12):

Yep.

EW (01:04:12):

Our guest has been Daniel Situnayake, author of the new O'Reilly book, "AI at the Edge: Solving Real-World Problems with Embedded Machine Learning." Daniel is also the Head of Machine Learning at Edge Impulse.

CW (01:04:26):

Thanks Daniel. Fascinating, as always.

DS (01:04:29):

Thank you. Such a pleasure talking with you both.

EW (01:04:31):

Thank you to Christopher for producing and co-hosting, and thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm. And now I have a little thing to read you.

(01:04:44):

I asked ChatGPT to write a podcast transcript, where the podcast is hosted by a cricket who asks lightning round questions to a machine learning expert.

Cricket (01:04:57):

Welcome to Chirping with the Experts. I am your host, a cricket, and today we have a special guest joining us, a machine learning expert. Let us jump right in with our lightning round questions. What excites you most about the field of machine learning and how does it relate to the interest of other crickets?

EW (01:05:14):

I am not going to fill in Daniel's part, because he did it already and it was very good. Let me ask a couple more of these lightning round questions.

Cricket (01:05:22):

Can you give us a brief explanation of what machine learning is, and how it works in a way that other crickets might understand?

CW (01:05:29):

<laugh>

EW (01:05:29):

And finally, the one that we did in fact ask Daniel.

Cricket (01:05:36):

What are some of the common applications of machine learning in the real world, and how might they relate to the interest of crickets and other animals?