497: Everyone Likes Tiny

Transcript from 497: Everyone Likes Tiny with Kwabena Agyeman, Christopher White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Kwabena Agyeman. We are going to talk about cameras and optimizing processors and neural networks and Helium instructions. I am sure it will all make sense eventually.

CW (00:00:29):

Hi, Kwabena. Welcome back.

KA (00:00:29):

Hey. Nice to see you guys again. Thanks for having me on.

EW (00:00:34):

Could you tell us about yourself, as if we did not record about a year ago?

CW (00:00:39):

<laugh>

KA (00:00:39):

Hi. I am Kwabena. I run a company called OpenMV. What we have been doing is really exploring computer vision on microcontrollers. We got our start back in 2013- Well actually a little bit before that. I am part of the original CMUcam crew, so I built the CMUcam4 back in 2011. That was basically doing color tracking on a microcontroller.

(00:01:02):

Since then, we founded a company called OpenMV, that is doing basically higher level algorithms. So, line tracking, QR code detection, AprilTags, face detection through Haar Cascades, and other various vision algorithms.

(00:01:23):

We put that on board pretty much the lowest end processors- Well, sorry, not the lowest end processors. The highest end microcontrollers, but lowest end CPUs, compared to a desktop CPU. So we got software that was meant for the desktop running on systems that had barely any resources, compared to the desktop.

(00:01:41):

Anyway, that was a hundred thousand OpenMV Cams ago, when we started. Since then, as I said, we have sold over a hundred thousand of these things into the world. People love them and use them for all kinds of embedded systems.

(00:01:55):

And we are launching a Kickstarter. We have already launched it right now. That is about the next gen, which is about a 200 to 600 times increase in performance. And really takes microcontrollers, and what you can do with them and computer vision applications in the real world, to the next level.

(00:02:14):

Where you can really deploy systems into the world that are able to run on batteries all day, and can actually process data using neural network accelerators on chip, and make mobile applications.

EW (00:02:32):

All right. Now I want to ask you a bunch of questions that have nothing to do with any of that.

KA (00:02:38):

That sounds good. That was a mouthful. I apologize. <laugh>

CW (00:02:43):

It is time for lightning round, where we will ask you short questions, and we want short answers. And if we are behaving ourselves, we will not ask you for additional details. Are you ready?

KA (00:02:53):

Yes.

EW (00:02:54):

What do you like to do for fun?

KA (00:02:57):

What do I like to do for fun? Right now, it is really just working out in the morning. I do like to walk around the Embarcadero in San Francisco. And try to get a run in the morning.

CW (00:03:09):

What is the coolest camera system that is not yours?

KA (00:03:13):

Coolest camera system that is not mine? Ah. Let us see. There is a lot of different stuff. I am really impressed by just- Did self-driving in a previous life. So it was awesome to see what you were able to do with that. Being able to rig up trucks and such, that could drive by themselves, back at my last job.

(00:03:36):

And then seeing Waymos and et cetera drive around downtown in San Francisco has been incredible. In fact, getting to take my 80-year-old parents in them, and thinking about how my dad grew up without electricity back in the forties and seeing him in a car that is driving around now, that is a crazy amount of change.

EW (00:04:00):

Specifics of your preferred Saturday morning drink, when you are indulging yourself?

KA (00:04:06):

My Saturday morning drink when I am indulging myself? These are good lightning round questions. Ah. Huh. Let us see. Well, I have had to cut down sugar, so I cannot say I have that many interesting things. However, I can tell you about beers I like. Which is, I like to go for those Old Rasputins-

CW (00:04:24):

Oh, yeah.

KA (00:04:25):

Really dark beer.

CW (00:04:25):

Yeah. I used to love those.

EW (00:04:26):

Yeahh.

KA (00:04:26):

They kind of do the job of getting you tipsy in one go-

CW (00:04:31):

<laugh>

KA (00:04:32):

And also filling you up at the same time.

CW (00:04:36):

Since you like machine vision and cameras and things. Are you aware of the burgeoning smart telescope market?

KA (00:04:43):

No, I have not. But we had someone use an OpenMV Cam to actually do astrolabe- They made the camera follow the stars-

EW (00:04:54):

Mm-hmm.

CW (00:04:54):

Yes.

KA (00:04:54):

So they could do a long exposure.

CW (00:04:54):

There are a lot of really cool new low cost products, that have nice optics and good cameras and little computers. I think they are similar to OpenMV's compute power and stuff. But they can look at the stars and plate solve and figure out where it is looking, and do all the go-to stuff, plus imaging. It is a nice integrated package, that might be a place to explore at some point.

EW (00:05:21):

I need a little air horn or sound maker so that I- Because you are breaking lightning round rules.

CW (00:05:26):

A little bell.

EW (00:05:27):

A little bell. What do you wish you could tell the CMUcam4 team?

KA (00:05:32):

The CMUcam4 team? Well, that was me back in college. I would say, "I never expected I would end up doing this for so long." <laugh> It has been a journey now at this point.

EW (00:05:46):

Do you have any advice you would give your past self? Like, "Really do not put in those fix me laters, because you are going to actually have to fix them later."

KA (00:05:58):

The only thing I would say is, "Focus on the most important features, and not do everything and anything." Everything you build, you have to maintain and continue to fix and support.

EW (00:06:09):

<laugh>

KA (00:06:09):

So a smaller surface area is better.

CW (00:06:15):

Build nothing! If you could teach a college course, what would you want to teach?

KA (00:06:21):

I really think folks should learn assembly, and do some kind of real application in it.

EW (00:06:28):

Hmm.

KA (00:06:30):

I say that because if you remove all the abstractions and really get into how the processor works, you learn a lot about how to write performant code, and you understand when the compiler is just not putting in the effort it should be. How to think about things.

(00:06:48):

It is important when- A lot of our problems in life come down to- Well, as engineers, we optimize things, we fix problems. And not doing that optimization in the beginning, or at least thinking about it, can end up costing you in various ways later on, having to redo things.

CW (00:07:11):

See also last week's episode, if you are interested in-

EW (00:07:14):

<laugh>

CW (00:07:16):

Learning about the basic building blocks of computing.

EW (00:07:20):

Okay. Enough lightning round.

CW (00:07:21):

Yeah.

EW (00:07:25):

You talked a little bit about what OpenMV is. What I heard was small camera systems that are intelligent. Is that right?

KA (00:07:38):

Yeah, yeah. We build a camera that is about one inch by one and a half inches, or 1.75 inches. So it is pretty tiny. A little bit smaller than a- Well, smaller than a Raspberry Pi, like half of the size or so.

(00:07:51):

What we do is, as I said, we find the highest end microcontrollers available on the market, and we make them run computer vision algorithms. We have been a little bit of a thought leader here. When we got started, no one else was doing this.

(00:08:06):

There was just an OpenCV thread. Well, a- What is it? Not Reddit. Stack Overflow. There was a Stack Overflow thread, where someone was asking, "Can I run OpenCV on a microcontroller?" And the answer was, "No."

(00:08:19):

So we set out to change that. Now that is no longer the highest ranking thing on Google Search when you search for "computer vision on a microcontroller." But let me tell you, when we started, it stayed up there for quite a few years.

EW (00:08:33):

Actually, you just said you have 1.7 by 1.6?

KA (00:08:38):

Yeah, about 1.7 inches by 1.3 or so, I think. That is our standard cam size. So it is pretty tiny.

EW (00:08:46):

This is a board? This is not a full camera, this is something you put into another system?

KA (00:08:51):

Well, it is a microcontroller with a camera attached to it. So we basically have a camera module and a microcontroller board, and then I/O pins. So you can basically sense- You can get camera data in, detect what you want to do, and then toggle I/O pins based on that. So kind of everything you need in one system.

(00:09:11):

It was meant to be programmable, so you would not have to attach a separate processor to figure out what the data was. This is important, because a lot of folks struggle with interprocessor communication. So not having to do that, and just being able to have one system that can run your program, makes it a lot easier to build stuff.

EW (00:09:33):

When you say, "Programmable," you mean MicroPython.

KA (00:09:36):

Yes.

EW (00:09:36):

Not just, "We have to learn all about the camera, and about the system, and start over from scratch," but just MicroPython.

KA (00:09:47):

Yes. So we have MicroPython running on board. MicroPython has been around for ten plus years now. It basically gives you an abstraction, where you can just write Python code, and be able to process images based on that.

(00:10:02):

So all the actual image processing algorithms and everything else, those are in C, and they are optimized. And then Python is just the API layer, to invoke the function, to call a computer vision algorithm, and then return results.

(00:10:16):

A good example would be like, let us say you want to detect QR codes. There is a massive C library that does the QR code detection. You just give it an image object, which is also produced by a massive amount of C code, that is doing DMA accelerated image capture. Then that returns a list of Python objects, which are just the QR codes, and the value of each QR code in the image. So it makes it quite simple to do things.
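For a sense of what that looks like from the user's side, here is a minimal sketch in OpenMV's MicroPython API. The sensor and find_qrcodes calls follow the published OpenMV docs, but treat the exact names and arguments as something to verify against your firmware version:

```python
# Minimal OpenMV MicroPython sketch: capture frames and print any QR codes found.
# Method names follow the published OpenMV docs; verify against your firmware version.
import sensor

sensor.reset()                       # initialize the attached camera module
sensor.set_pixformat(sensor.RGB565)  # color pixels
sensor.set_framesize(sensor.QVGA)    # 320x240 keeps decoding fast
sensor.skip_frames(time=2000)        # let auto exposure settle

while True:
    img = sensor.snapshot()          # DMA-accelerated capture, returns an image object
    for code in img.find_qrcodes():  # the heavy lifting happens in the C library
        print(code.rect(), code.payload())
```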

EW (00:10:45):

But from my perspective as a user, it is just take picture, get QR code information.

KA (00:10:54):

Yes, yes. We make it that simple.

EW (00:10:58):

<sigh> Is that not so nice?

CW (00:10:58):

<laugh>

EW (00:10:58):

But QR codes, okay, we know how to interpret those. But I can do my own AI on this. I can make an orange detector.

CW (00:11:12):

Or as I call it, "Machine learning."

KA (00:11:13):

Yes.

CW (00:11:13):

<laugh>

EW (00:11:13):

<laugh>

KA (00:11:13):

Well, "Hot dog, no hot dog."

EW (00:11:18):

Whereas I call it, "Regression."

CW (00:11:20):

Linear regression at scale.

EW (00:11:21):

Linear regression <laugh>.

CW (00:11:26):

Sorry. <laugh>

KA (00:11:28):

So what is new with OpenMV Cam, is that we actually launched two new systems. One called the "OpenMV Cam N6," and one called the "OpenMV AE3." These two systems are super unique and super powerful.

(00:11:44):

Microcontrollers now come with neural network processing units on board. These are huge because they offer an incredible amount of performance improvement.

(00:11:54):

Case in point: on our OpenMV Cam H7 Plus, which is one of our highest end models that we currently sell, if you wanted to run a YOLO object detector, that detects people in a frame, or oranges or whatever object you train it for, that used to run at about 0.6 frames a second. And the system would draw about 180 milliamperes or so at five volts while running.

(00:12:23):

With the new systems, we have got that down to 60 milliamperes while running. So 3x less power. And the performance goes from 0.6 frames a second to 30. So if you do the math there, it is about a 200x performance increase.

CW (00:12:37):

<laugh>

EW (00:12:37):

So crazy!

CW (00:12:37):

Yeah.

EW (00:12:39):

It is like, Moore's law has gone out of control.

CW (00:12:40):

Well, that is what happens when you put dedicated hardware on something.

KA (00:12:42):

Yes.

EW (00:12:43):

Well, the AE3 is the small one, the cheaper one. It is like 80 bucks.

KA (00:12:47):

Yes.

EW (00:12:49):

I made a point earlier about saying the size.

KA (00:12:52):

One inch by one inch.

EW (00:12:53):

But yes, this one is tiny!

KA (00:12:56):

The size of a quarter. We got everything down to fit into a quarter.

EW (00:13:02):

Your camera lens is the biggest part on that, is it not?

KA (00:13:06):

We wanted to make that smaller too. It ended up being that big though, because- For the audience, the OpenMV AE3- We got two models.

(00:13:14):

The N6 is a successor to our standard OpenMV Cam line. It has got tons of I/O pins. It has got a removable camera module, that you can put in multiple cameras, and other various things. And it has got all this hardware on board. Full featured.

(00:13:31):

Then we built a smaller one though, called the "OpenMV AE3," which is- Honestly, the way it came about is an interesting story, so I do want to go into that in a little bit. But it came out to be a really, really good idea, on making this such a small, tiny camera. It is so tiny you can put it in a light switch. I mean at one inch, by one inch, you can put it almost anywhere.

(00:13:54):

It features a neural network processor, a camera, a time of flight sensor, a microphone, an accelerometer, and a gyroscope. So five sensors all in one, in a one inch by one inch form factor, with a very powerful processor on board.

EW (00:14:11):

I am just boggled. It has got everything. It is so small, and I just...

CW (00:14:21):

You work with microcontrollers.

EW (00:14:22):

I know! And I work with small stuff. It is just this also has the camera. It has always been for me, once you add a camera, everything gets big again, because it is just-

CW (00:14:32):

Do you have your phone? <laugh>

(00:14:35):

I do want to ask about the cameras, just briefly, so we can get that out of the way, because I am camera focused these days. So. What kind of- What are these-

EW (00:14:46):

One megapixel.

CW (00:14:47):

One megapixel. And, what kind of field of view do the two options have?

KA (00:14:52):

Both of them are just- We try to aim for around 60 degrees field of view.

CW (00:14:55):

Okay.

KA (00:14:55):

It is pretty standard. Both of them though actually have removable lenses.

CW (00:14:59):

Okay.

KA (00:15:00):

Which is really, really nice. So for the OpenMV Cam N6, that is a standard M12 camera module.

CW (00:15:05):

Got it.

KA (00:15:07):

So you can put any lens you want on it. We have a partnership with a company called "PixArt Imaging," not Pixar Imaging. So PixArt- We actually met them at something called "tinyML," which got rebranded to the Edge- No.

EW (00:15:25):

Edge Impulse?

CW (00:15:27):

No, it is not Edge Impulse. Okay. There are too many Edge things. Anyway, it used to be called "tinyML." It is now the "Edge AI Foundation," I think.

EW (00:15:35):

Wait. Those are not related?

KA (00:15:39):

They are the same organization. It is a strange name.

EW (00:15:41):

No, no. The Edge Impulse and Edge AI are not related?

CW (00:15:44):

One is a company, one is a conference.

EW (00:15:45):

Well, yes. But-

KA (00:15:46):

One is a company and one is a consortium.

CW (00:15:47):

Oh, oh. I do not know.

EW (00:15:50):

But are they?

KA (00:15:51):

I mean, Edge Impulse is a member company of the Edge AI Foundation.

EW (00:15:55):

Oh.

CW (00:15:55):

That is too confusing.

KA (00:15:57):

And then you have the Edge AI Vision Alliance, which is a different thing.

CW (00:16:00):

No, no. We are not doing it. <laugh>

KA (00:16:03):

Then you have the Edge AI Hardware-

CW (00:16:04):

<laugh>

KA (00:16:04):

What is it? Edge AI Hardware conference, which is another thing. So there are too many Edges here. Makes it a little challenging.

CW (00:16:18):

<laugh>

KA (00:16:18):

Anyway. Yeah, what we were talking about. Cameras.

CW (00:16:24):

<laugh>

KA (00:16:28):

We have two different types. M12, which is a standard camera module. And we have a partnership with PixArt Imaging. They actually hooked us up with one megapixel color global shutter cameras, so we are making that the standard on both systems.

(00:16:42):

So the N6 has a one megapixel color global shutter camera, and this can run at 120 frames a second at full resolution. So that is 1280 by 800.

(00:16:49):

And then the OpenMV AE3 has the same camera sensor, but we shrunk the lens down to an M8 lens, which is also removable. So you can put a wide angle lens or a zoom lens on there, if you want. That will be able to run at a similar frame rate.

(00:17:05):

It has less resources than the N6, so it cannot actually achieve the same speed and performance of the N6, but we are still able to process that camera at full res. So it could do maybe 30 frames a second at 1280 by 800.

(00:17:18):

But for most of our customers, we expect people to be at the VGA resolution, which is 640 by 400 or so, on this camera. That will give you about 120 frames a second.

EW (00:17:31):

Okay. I want to switch topics, because I have never used an NPU, and I do not know what it is. It is a neural processing unit. I got that. So it has some magical AI stuff that honestly seems like a fake.

CW (00:17:47):

It is a whole bunch of...

EW (00:17:49):

How is it different from a GPU?

CW (00:17:51):

It is all the parts of a GPU without the graphics necessarily. So it is all the linear algebra stuff.

KA (00:17:55):

Yeah.

EW (00:17:55):

So it is just an adder multiplier?

CW (00:17:57):

Pretty much.

EW (00:17:59):

How is it different from an ALU? <laugh>

CW (00:18:01):

It has got a billion of them.

EW (00:18:03):

It is a vectorized ALU?

CW (00:18:04):

Yeah.

KA (00:18:06):

Pretty much. This is actually what I wanted to talk about more. Not to try and do the sales pitch on this. I think folks will figure out things themselves. But I wanted to talk to some embedded engineers here about cool trends in the industry.

(00:18:18):

So NPUs, what are they? Yeah. Basically there was an unlock for a lot of companies, I think, that they realized, "Hey, we have got all these AI models people want to run now. And this is a way to actually use sensor data." Right?

(00:18:37):

You have had this explosion of the IoT hardware revolution happen. People were putting internet on microcontrollers, and connecting them to the cloud, and you would stream data to the cloud. But the challenge there is that that is a lot of data being streamed to the cloud, and then you have to do something with it.

(00:18:56):

You saw folks just making giant buckets of data that never got used for anything. You might add an application to visualize the data, but you technically never actually put it to use. You just have buckets and buckets of recordings and timestamps. That is all very expensive to maintain, to have. And while it is nice per se, if it is not actionable, what good is that?

(00:19:23):

A lot of times, it is not quite clear how do you make an algorithm that actually- How do you use accelerometer data and gyroscope data directly? If you have seen the kind of filters and things you need to do with that classically, they are pretty complicated.

(00:19:36):

How do you make a step detector, or detect wrist shaking? There is not necessarily a closed form mathematical way to do that. You process this data using a model, where you capture what happens, how you move your hand and et cetera. Then you regress and train a neural network to do these things.

(00:20:01):

So that unlock has allowed us to make sensors that had very interesting data outputs, and turn those into things that could really detect real world situations. Of course, this becomes more and more complicated, the larger amount of data you have. So with a 1D sensor, it is not too bad. You can still run an algorithm on a CPU. But once you go to 2D, then it starts to become mathematically challenging. That is the best way to say it.

EW (00:20:33):

1D and 2D here for an accelerometer are the X and Y channels? Or do you have a different dimension?

KA (00:20:41):

It would just be like- An accelerometer is just a linear time series, so to build a neural network model for that, you only need to process so many samples per second. Versus, for an image, you have to process the entire image every frame.

EW (00:20:59):

Okay. Yeah. Sorry, when you went from 1D to 2D and you were still talking about accelerometers, I was like, "Is that X, Y, Z? Or something else?"

KA (00:21:07):

Well, you also have 3D accelerometers, so-

EW (00:21:09):

Right, right.

KA (00:21:10):

It is really six channels, if you think about it.

EW (00:21:13):

And usually there are gyros, and sometimes there are magnetometers. So yes, you throw all the sensors on there, but those are still kind of 1D signals. As opposed to the camera, which is a 2D signal, because...

CW (00:21:26):

2D in large dimension too.

EW (00:21:29):

Right. Right.

KA (00:21:30):

Yeah. Yeah. Like an accelerometer. You might have the window of samples you are processing. Maybe that is, I do not know, several thousand at once per ML model, and a thousand different data points per time to run the model. That sounds like a lot, but not really, compared to several hundred thousand-

EW (00:21:53):

Images.

KA (00:21:53):

That would be if it were images.

CW (00:21:54):

Or millions, yes.

KA (00:21:55):

Yeah.

EW (00:21:57):

There is a reason there is an M in megapixel.

KA (00:22:00):

Yes. Yeah.

EW (00:22:01):

There are a lot of pixels.

KA (00:22:03):

Yes, absolutely. So there are a lot more. Anyway, enter the NPU. Basically processor vendors have been doing this for a while. Like, your MacBook has it. Where they have been putting neural network processors on systems. What these are, are basically giant multiply and accumulate arrays.

(00:22:22):

So if we look at something like the STM32N6, it will have 288 of these in parallel, running at a gigahertz. That is 288 times one gigahertz for how many multiply and accumulates it can do per second.

EW (00:22:38):

Okay, my brain just broke. Let us break that down a little bit. Okay, 288 parallel add multiply units.

KA (00:22:49):

Yes.

EW (00:22:49):

So any single step is going to be 288, but I can do a whole heck of a lot in one second.

KA (00:22:56):

Yes. So that is 288 billion multiply and accumulates per second.

(00:23:01):

Then it also features a few other things. Like there is an operation called "rectified linear unit." That is also counted as an op. It is basically a max operation, so that is done in hardware. So then you go from 288 to like 500, effectively. There are a few other things they can do in hardware for you also.

(00:23:21):

All that combined, it is equivalent to about 600 billion operations per second, for basically running any ML model you want.
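For the back-of-the-envelope version of that math: 288 multiply-accumulates per clock at one gigahertz, with each multiply-accumulate counted as two operations (one common way of counting), already lands in the same ballpark as the roughly 600 billion operations per second quoted here:

$$
288\ \tfrac{\text{MACs}}{\text{cycle}} \times 10^{9}\ \tfrac{\text{cycles}}{\text{s}} = 2.88\times10^{11}\ \tfrac{\text{MACs}}{\text{s}} \approx 5.76\times10^{11}\ \tfrac{\text{ops}}{\text{s}}
$$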

EW (00:23:35):

But there is a problem here. I do not know that I believe in AI. Or ML. It seems like-

CW (00:23:44):

Nah. Let us not confuse two things.

EW (00:23:48):

<laugh> The philosophical and the technical?

CW (00:23:50):

No, I was going to have a disclaimer about we are not talking about LLMs.

EW (00:23:56):

Ah. Okay.

CW (00:23:56):

Which is what I traditionally am up in arms about. This is-

EW (00:24:00):

Because this is a copyright issue and morals.

CW (00:24:01):

This is image classification. No, not- Anyway. Everybody knows-

EW (00:24:05):

<laugh>

CW (00:24:05):

My opinion on AI qua LLMs.

EW (00:24:09):

And if you do not, it will be just Chris and me next week, so we can find out.

CW (00:24:12):

I work on- One of my clients works on machine learning stuff. I work on machine learning stuff. I have for years. It is very useful for these kinds of tasks of classification and-

EW (00:24:21):

And self-driving.

CW (00:24:22):

Detection. It is not useful for self-driving, because that does not seem to work very well.

EW (00:24:26):

It worked fine when I did it.

CW (00:24:32):

<laugh> Yes. Your truck on a dirt road, following a different truck at 20 miles an hour, was- It worked. Yes. Anyway. Anyway. What was your point?

EW (00:24:43):

<laugh>

KA (00:24:43):

Well-

CW (00:24:46):

Her point. Your point is probably better than both of our points <laugh>.

EW (00:24:50):

My point was, we are seeing a lot of funding, a lot of things that go into processors that are called "neural processing units," like they are supposed to be used for neural networks. And yet we are also seeing some difficulties with the whole ML and AI in practice.

(00:25:10):

I do not think those difficulties are actually related to what you are working on. But do you see them reflected in either your customers or your funders? Or people just talking to you and saying, "Why are you doing this? Because it is not really working out as well as people say it is."

KA (00:25:27):

Well, I think there is some difference there. One, we are using the branding "AI," because that is what everyone uses nowadays.

EW (00:25:35):

Oh, yeah.

CW (00:25:35):

Yeah. You have to.

KA (00:25:36):

Just to be clear, I would prefer to call it "ML," but that is old school now. Everyone is using AI, so we had to change the terminology just to make sure we are keeping up. We are just doing CNN accelerators. These are probably pre ChatGPT, really.

EW (00:25:54):

Convolutional neural networks.

KA (00:25:57):

Yeah. Yeah. What they are doing is- Most of the object detector models, for example. Let us say you want to do something like human body pose, facial landmarks, figure out the digits on where your hand- Your hand detection and figuring out how your fingers are, your finger joints, things like that.

(00:26:18):

These are all built off these convolutional neural network architectures, that basically- Imagine small image patches that are being convolved over the image. So imagine a three by three activation pattern, and that gets slid over the image and produces a new image. And then you are doing that in parallel, like- Um. Oh. It is way too hard to describe how convnets work to-

EW (00:26:44):

Let me try.

KA (00:26:46):

At a high level.

EW (00:26:47):

You have a little image that you remember from some other time. Maybe it is a dog's face. You slide it over the image you have here in front of you, and you say, "Does this match? Does this match? Does this match?"

CW (00:27:01):

Or, "How well does it match?"

EW (00:27:02):

"How well does it match?" And then at some point if you hit a dog's face, it matches really well. And now you can say, "Oh, this is dog."

(00:27:09):

Now you do that with eight billion other things you have remembered through the neural network, and you can say, "Well, this is a face." Or "This is where I have best highlighted a cat face, and this is where it best highlighted a dog face, and there is a 30% chance it is one or the other."

(00:27:27):

The convolving is about saying, "So I have this thing, and I have what is in front of me. I want to see if this thing that I remember, matches what is in front of me." There are lots of ways to do it. You can have different sizes of your remembered thing, because your dog face might be bigger or smaller in your picture.
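A tiny NumPy sketch of that "slide the remembered patch over the image and score the match" idea (illustrative only; a real convolutional network learns many such patches and stacks them in layers):

```python
# Toy version of "slide a remembered patch over the image and score each placement".
# A real CNN layer does this with many learned patches at once, layer after layer.
import numpy as np

def match_scores(image, patch):
    """Return a score for every placement of patch over image (cross-correlation)."""
    ph, pw = patch.shape
    out_h, out_w = image.shape[0] - ph + 1, image.shape[1] - pw + 1
    scores = np.zeros((out_h, out_w))
    for y in range(out_h):
        for x in range(out_w):
            window = image[y:y + ph, x:x + pw]
            scores[y, x] = np.sum(window * patch)  # bigger = better match
    return scores

# Usage: the highest-scoring location is where the "remembered" patch fits best.
image = np.random.rand(64, 64)
patch = image[20:28, 30:38]   # pretend this is the remembered dog-face template
scores = match_scores(image, patch)
print(np.unravel_index(np.argmax(scores), scores.shape))
```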

CW (00:27:50):

If you look inside the networks after they have learned, and interrogate the layers, you can see what it is learning. It will learn to make edge detectors, and lots of even more-

EW (00:28:01):

And patterns and stripes.

CW (00:28:02):

More fine features than just a face. It might just be, "Okay, there is a corner." A corner might mean a nose, but it might mean this, and it combines all of that. It gets very sophisticated in the kinds of things that it looks for. But you can look inside a convolutional neural network that has been trained.

EW (00:28:16):

It is fascinating.

CW (00:28:16):

And kind of get a sense for what it is doing.

KA (00:28:19):

Yeah. This is the way things are going nowadays, for how people do these higher level algorithms. Honestly, you really could not solve it before, without using these kinds of techniques. Just because most of the world is full of these weird amorphous problems, where there is no closed mathematical form to describe like, "What is a face?"

(00:28:40):

You actually have to build these neural networks that are able to solve these problems. It is funny to say it, because this started blowing up ten plus years ago now. So it has actually been here for a long time. It is not necessarily even new anymore.

EW (00:28:57):

Definitely not. So when I fuss about, "Is AI still a thing? Or is it going to be a thing?" It is not this class of problem. This class of problem, the machine learning parts, are really well studied and very effective.

CW (00:29:12):

And, with these- This is the last thing I will say on this. You do get with the output, a confidence level.

EW (00:29:17):

Right.

CW (00:29:18):

It says, "This is a bird," and it says, "I am pretty sure. 75%. Or 85%." You can use those in your post-processing to say, "Well, what action should I take, based on this confidence?" As opposed to certain other kinds of-

EW (00:29:31):

<laugh>

CW (00:29:33):

<clears throat> AI things that do not do that.

KA (00:29:35):

Yeah. So the ChatGPT-like stuff, that is a whole different ball game. Let us come back next year maybe and talk about that.

CW (00:29:42):

Yes. <laugh> Yeah. Okay.

EW (00:29:44):

You mentioned "YOLO," which is an algorithm. Could you tell us basically the 30 second version of what YOLO is?

KA (00:29:54):

Yeah. Yeah. So YOLO is "You Only Look Once."

(00:29:56):

So when you are trying to do object detection, the previous way you did this was that you would slide that picture of a dog over the entire image, checking every single patch at the same time. Well, patch one after another. You can imagine that is really computationally expensive, and does not work that well. And takes- Literally, the algorithm would run for seconds, to determine if something was there.

(00:30:19):

So if you only look once, it is able to do a single snapshot. It runs the model on the image once. It outputs all the bounding boxes, that surround all the objects that it was trained to find. That is why it is called "You Only Look Once."

(00:30:41):

There is another one called "Single Shot Detector," SSD. Before these were developed, yes, the way that you would find multiple things in an image, would be that you would slide your classifier. Basically a neural network that could detect if a certain patch of the image was one thing or other. You would just slide that over the image, at every possible scale and rotation, checking every single position.

(00:31:07):

That would be, "Hey, the algorithm could run on your server class machine." It would still take ten seconds or so to return a result of, "These are all the detections." So you can imagine on a microcontroller that would be, you would run the algorithm, come back a couple days later, and you would get the results.
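Schematically, the difference looks like this. classify_patch and detector below are hypothetical stand-ins for trained networks, not a real API, and the multi-scale loop is left out for brevity:

```python
# The pre-YOLO approach versus a single-pass detector, schematically.
# classify_patch() and detector() are hypothetical stand-ins for trained networks.

def sliding_window_detect(image, classify_patch, win=96, stride=16):
    """Old way: run a patch classifier at every window position (and, in practice,
    at several scales too). That is thousands of network invocations per frame."""
    detections = []
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            label, score = classify_patch(image[y:y + win, x:x + win])
            if score > 0.8:
                detections.append((label, score, x, y, win, win))
    return detections

def single_pass_detect(image, detector):
    """YOLO/SSD way: one forward pass returns every bounding box at once."""
    return detector(image)   # e.g. [(label, score, x, y, w, h), ...]
```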

EW (00:31:24):

But you have YOLO running pretty fast on the new Kickstarter processors. Did you code that yourself?

KA (00:31:32):

No. No. It is thanks to these neural network processors. The way they work, actually- The easiest one to describe is the one on the AE3, which is the Arm Ethos NPU. That one uses a library called "TensorFlow Lite for Microcontrollers." It is an open source library that is available.

(00:31:53):

They have a plugin for different accelerators. So basically if you do not have an Ethos NPU, it can do it on the CPU. Just a huge difference in performance. If you have the Ethos NPU available, then the library will offload computation to it.

(00:32:10):

You just give it a TensorFlow Lite file, which basically represents the network that was trained. As long as that has been quantized to 8-bit integers for all the model weights, it just runs.

(00:32:24):

The NPU is quite cool, in that it can actually execute the model in place. So you can place the model on your flash, for example, and the NPU- You just give it a memory area to work with, called the "tensor arena," for its partial outputs when it is working on things. It will run your model off flash, execute it in place, and produce the result, and then spit out the output. It goes super fast.
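From the MicroPython side, that whole flow is only a few lines. The sketch below is modeled on OpenMV's classic tf-style interface purely to illustrate the load-then-infer pattern; newer firmware exposes a reworked API, so treat the module name, file name, and method signatures here as assumptions and check the current documentation:

```python
# Illustrative load-then-infer flow for a quantized .tflite model on the camera.
# Module, file name, and method names are assumptions modeled on OpenMV's older
# "tf" interface; confirm the exact API against the firmware you are running.
import sensor, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)

# The int8 weights stay in flash and execute in place; only the "tensor arena"
# (scratch space for intermediate activations) has to live in RAM.
net = tf.load("person_detect.tflite")

while True:
    img = sensor.snapshot()
    for result in net.classify(img):   # offloaded to the Ethos NPU when one is present
        print(result.output())         # per-class scores
```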

(00:32:53):

We were blown away by the speed difference. For small models, for example, it is so crazy fast that it basically gets it done instantly. An example would be there is a model called "FOMO," Faster Objects, More Objects-

EW (00:33:06):

<laugh>

KA (00:33:07):

Developed by Edge Impulse. Yeah, the name is on the nose. That object detector went from running at about 12 to 20 frames a second on our current OpenMV Cams, to 1200.

CW (00:33:23):

I was going to ask how you- Models can be fairly sizable, and with microcontrollers we are really not talking about many megabytes of RAM. Yeah. You answered the question. So it gets stored in flash, and executed directly out of flash.

EW (00:33:35):

And 8-bit. That is an optimization that is happening all over: it turns out you do not need your weights to be super bit heavy. You can have lower resolution weights and still get most of the goodness of your models.

KA (00:33:51):

Yeah. Yeah, you can. This also does amazing things for memory bus bandwidth. Because if you imagine you are moving floating point numbers, like a double or a float, that is four to eight times more data you need to process. So with 8-bit, yeah, it is just a lot snappier trying to get memory from one place to another.

CW (00:34:14):

Quantizing to 16-bit is not too difficult. Quantizing to 8-bit, which I have tried a few times for various things, there are some steps required there that are a little above and beyond just saying, "Here is my model. Please change it to 8-bit for me." Right? You have to-

KA (00:34:29):

Yeah.

CW (00:34:30):

Yeah.

KA (00:34:31):

Typically what you need is to actually do something called- You want to do quantization aware training, where when the model is created, whatever toolchain you are using to do that or toolset, those actually need to know that you are quantizing, that you are going to be doing that.

(00:34:47):

Otherwise, when it tries to do it, it will just result in the network being broken, basically. You cannot just quantize everything, without any idea of what data is flowing through it. Otherwise it will not work out so well.

EW (00:35:02):

When we say, "8-bits," we do not mean zero to 255. We do in some cases, but there are actually 8-bit floating point formats, and that is part of this quantization issue.

CW (00:35:13):

I think these are integers.

KA (00:35:15):

No, it is not floating point. It is just scaling and offset. So each layer basically has a scale and offset, that is applied globally to all values in that layer. There are some more complex things folks are trying to do. Like making it so that is even further refined, where you have different parts of a layer being broken up into separate quantization steps.

(00:35:36):

But so far right now, for TensorFlow Lite for Microcontrollers, it is just each layer has its own quantization. It is not more fine-grained than that right now.
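That per-layer scheme is the standard affine int8 quantization: each layer (or tensor) carries one scale and one zero point, and the real value is recovered as the scale times the integer minus the zero point. A tiny self-contained sketch:

```python
# Affine int8 quantization with one (scale, zero_point) pair per layer:
#   real_value ~= scale * (int8_value - zero_point)
import numpy as np

def quantize(x, scale, zero_point):
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

weights = np.array([-0.42, 0.0, 0.17, 0.90], dtype=np.float32)
scale, zero_point = 0.0075, 0                 # chosen from this layer's value range
q = quantize(weights, scale, zero_point)
print(q, dequantize(q, scale, zero_point))    # recovers the originals to within rounding
```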

CW (00:35:46):

Okay. I was not aware it was per layer like that. That is cool.

EW (00:35:49):

I did not realize that the NPUs basically take TensorFlow Lite. That gives the power to a lot of people who are focused more on TensorFlow, and creating models and training them. So what did you have to do?

KA (00:36:08):

Well. Let us say, "It is not that easy."

EW (00:36:10):

<laugh>

CW (00:36:10):

<laugh> Really? Working with TensorFlow is not that easy?

EW (00:36:14):

Shocking! <laugh>

KA (00:36:14):

Yeah, it is not that easy. Let me say it like that.

CW (00:36:19):

And for no good reason! I think. But, anyway.

KA (00:36:22):

Well, what happens is actually different manufacturers have different ways of doing this.

CW (00:36:29):

Yes.

KA (00:36:29):

For ST, for example, they do not use TensorFlow Lite for Microcontrollers. They have their own separate AI system, called the "ST Neural-ART Accelerator." Their NPU is more powerful, but totally different software library and package. None of the code is applicable.

EW (00:36:47):

Oophh.

KA (00:36:48):

You need to use their toolchain to build models-

EW (00:36:51):

No ST.

KA (00:36:51):

And their libraries to run.

EW (00:36:53):

Let us talk about this ST. It is not a good idea.

KA (00:36:57):

Ah. Well, the reason they did that is because they wanted to have more control over it. It totally makes sense.

EW (00:37:04):

It lets them optimize in ways their processor is optimized for. Instead of with TensorFlow Lite, where you have to optimize the things everybody is doing, instead of what you are specifically- Never mind. Sorry.

KA (00:37:17):

Yeah. Yeah. I think that is the reason why they went for it. Another reason is- This is a weird architecture divergence. But with TensorFlow Lite, you have basically a standard library runtime that includes a certain number of operations. So for us with the OpenMV Cam, we have to enable all of them, even if you do not use them.

(00:37:36):

So ST was trying to be more considerate of their customers and say, "Okay, for customers who have less RAM or flash available on their chips, we want to instead compile the network into a shared library file, basically. That just has the absolute minimum amount of stuff needed. That way it is executed on your system. If you do not have an NPU, it works on the processor. If you do, then it goes faster."

(00:38:05):

The only challenge with that, is that means your entire firmware is linked against this model. It is not like a replaceable piece of software anymore. So it is optimum from a size standpoint. But it means that being able to swap out models, without having to reflash the entire firmware, becomes a challenge.

(00:38:23):

For us with MicroPython, one of our goals is so that you do not have to constantly update the firmware, to change any little piece of the system. So it was a lot easier to get the integration done for the Arm Ethos NPU, because it was built that way, where the library is fixed and the model is fungible.

CW (00:38:44):

Can you run multiple models? Like if I- You said the OpenMV had a camera, as well as some other sensors. Can I have a model running the vision part, and one looking at the sensors for gestures and things?

KA (00:38:56):

Yeah.

CW (00:38:57):

Okay.

KA (00:38:57):

Yeah. Yeah. That is part of the cool feature. With OpenMV Cam AE3, for example, you can actually have- We actually have two cores on it, so I wanted to get into that in a little bit.

(00:39:08):

But you can basically, once you finish running the model, you can have multiple models loaded into memory, and you just call inference and pass them the data buffer, whatever you want. Obviously only one of them can be running at a time, but you can switch between one or another and have all of them loaded into RAM. So if you wanted to have five or six models running and doing what you want, you can.

(00:39:30):

Again, the weights are stored in flash, just the activation buffers are in RAM. So as long as the activation buffers are not too big, there is really not necessarily any limit to this. It is just however much RAM is available on the system.

EW (00:39:45):

We are going to come back to some of these more technical details in a second. But if I got the AE3 from your Kickstarter, which just launched and you will fulfill eventually- But I got one. I got one today. What is the best way to start? Do I go to tinyml.com, which now will redirect me somewhere else? How do we start?

KA (00:40:16):

Yeah, we have actually thought about that for you. There are two things. One, we built into OpenMV IDE, our software, a model zoo. This basically means you are going to be able to have all the fun easy-to-use models, like human body pose, face detection, facial landmarks, people detection. All of that. There are well-trained models for that. Those are going to be things you can deploy immediately. We will have tutorials for that.

(00:40:41):

And then for training your own models though, we are actually in partnership with Edge Impulse. And another company called "Roboflow," which is a big leader in training AI models. With both of their support, they actually allow you to make customized models using data.

(00:40:58):

One of the awesome things that Roboflow does, for example, and Edge Impulse, is that they do automatic labeling for you, in the cloud, using large language models.

(00:41:09):

There are these ones called "vision language models" that are kind of as smart as ChatGPT, but for vision. So you can just say, "Draw a bounding box around all people in the image," and it will just do that. You do not need to do it yourself. It will find most of the people in an image, and draw bounding boxes around them. Or you can say, "Oranges," or, "Apples," or whatever you are looking for.

(00:41:30):

Then using that model, that basically helps you create- You just take the raw data you have, ask the vision language model to mark it up with whatever annotations are required to produce a dataset, that can then be trained to build one of these more purpose made models, that would run on the camera. It is extracting the knowledge of a smarter AI model, and then putting it into a smaller one, that can run on board.

EW (00:42:00):

I am familiar with Edge Impulse, but Roboflow is new to me. Have they been around for very long? Are they robotics focused? Or is it just now it is everything, machine learning and vision for them?

KA (00:42:13):

Roboflow is just focused on machine vision.

EW (00:42:15):

Okay.

KA (00:42:15):

They are actually quite big in the more desktop ML space. Like NVIDIA Jetson folks, and all the developers who are at a higher level than microcontrollers, that is where they have been playing.

(00:42:29):

But they are one of the leaders in the industry for doing this, and making it easy for folks to run models. We are working with them to help bring these people into the market, to help make it so that you can train a model easily. What they do is they will provide you with a way to train a YOLO model, for example, that can detect objects and the object can be anything.

(00:42:52):

As I mentioned, they will help bootstrap that. So you do not even have to draw bounding boxes yourself, or label your data. You just go out and collect pictures of whatever you want, put that into the system, ask the vision language model to label everything, and then you can train your own model, quick and easy.

EW (00:43:09):

Is it really?

KA (00:43:11):

Well, the deployment might be a challenge. We have got to work through those issues, but the hope is, it will be by the time we ship.

EW (00:43:18):

Last time we talked, we hinted at the stuff with the Helium SIMD. Actually, I should start that, because I am not going to assume other people have heard that episode. What is the Helium SIMD, and why is it important?

CW (00:43:36):

Especially since you have this NPU? That is what I was going to ask. Because you mentioned it. Yeah.

KA (00:43:40):

Well, there are two big changes that we are seeing actually, on these new microcontrollers. I think the first thing to mention is yes, they all have neural network processing units on board. These offer literally 100x performance speedups. People should be aware of that. I do not know where you get 100x performance speedups out of the box on things. That is two orders of magnitude, which is a pretty big deal.

(00:44:04):

But even more so, Arm also added the Cortex-M55 processor for microcontrollers, which features vector extensions. So last year we were just getting into this, and we were thinking about what it looks like to program with vector extensions. I had not done any programming yet. I was just talking about the future of what it could be.

(00:44:22):

But now with launching the OpenMV Cam AE3 and N6, I spent a lot of time writing code in Helium. In particular, the OpenMV Cam AE3 is actually somewhat of a pure microcontroller. It does not have any vision acceleration processing units. There is nothing in there specifically to make it easier to process camera data.

(00:44:50):

It has MIPI CSI, which allows you to receive camera data. It also has a parallel camera bus. But there is no- Normally, processors nowadays have something called a "image signal processor," that will do things called "image debayering." It will do scaling on the image, color correction. A bunch of different math operations that have to be done per pixel. So it ends up being an incredible amount of stuff the CPU would have to do.

(00:45:15):

That does not actually exist on the OpenMV Cam AE3 in hardware. The N6 from ST has that piece of logic, so it is able to have a hundred percent hardware offload to bring an image in. That is why it is able to achieve a higher performance from the camera, because you do not have to do any CPU work for that.

(00:45:32):

But what we did for the OpenMV Cam AE3, is we actually managed to process the image entirely on chip, using the CPU. So the camera itself outputs this thing called a "Bayer image," which is basically each row is red, green, red, green, red, green, and then the next row is green, blue, green, blue, green, blue. Then it alternates back and forth.

(00:45:55):

So to get any pixel- If you want to get the color of any particular pixel location, you have to look at pixels to the left, right, up, down, then diagonal from it. And then compute. And that changes. That pattern changes every other pixel, because depending on the location you are at, you are looking at different color channels, to grab the value of the pixel.

(00:46:16):

You have to compute that per pixel, to figure out what the RGB color is at every pixel location. If you just think about what I just said in your head, it is a lot of CPU just to even turn a Bayer image into a regular RGB image, that you can even use to process and do anything with.
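A naive pure-Python/NumPy version of that neighbor averaging for an RGGB pattern, just to show the per-pixel work involved (the camera firmware does this in Helium-vectorized C, and production demosaic filters are considerably smarter than plain bilinear averaging):

```python
# Naive bilinear demosaic of an RGGB Bayer image: every output pixel's missing
# color channels are averaged from neighbors of the right color. This loop version
# only illustrates the per-pixel work; real debayer code is heavily vectorized.
import numpy as np

def debayer_rggb(bayer):
    h, w = bayer.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    padded = np.pad(bayer.astype(np.float32), 1, mode="reflect")
    for y in range(h):
        for x in range(w):
            n = padded[y:y + 3, x:x + 3]         # 3x3 neighborhood centered on (y, x)
            if y % 2 == 0 and x % 2 == 0:        # red site: G from cross, B from diagonals
                r = n[1, 1]
                g = (n[0, 1] + n[2, 1] + n[1, 0] + n[1, 2]) / 4
                b = (n[0, 0] + n[0, 2] + n[2, 0] + n[2, 2]) / 4
            elif y % 2 == 1 and x % 2 == 1:      # blue site: G from cross, R from diagonals
                b = n[1, 1]
                g = (n[0, 1] + n[2, 1] + n[1, 0] + n[1, 2]) / 4
                r = (n[0, 0] + n[0, 2] + n[2, 0] + n[2, 2]) / 4
            elif y % 2 == 0:                     # green site on a red row: R left/right, B above/below
                g = n[1, 1]
                r = (n[1, 0] + n[1, 2]) / 2
                b = (n[0, 1] + n[2, 1]) / 2
            else:                                # green site on a blue row: B left/right, R above/below
                g = n[1, 1]
                b = (n[1, 0] + n[1, 2]) / 2
                r = (n[0, 1] + n[2, 1]) / 2
            rgb[y, x] = (r, g, b)
    return rgb
```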

CW (00:46:34):

Basically every digital camera sensor works this way. The color resolution is far lower than the absolute pixel resolution, because of these filters. Because a camera does not know about color, it is just measuring light intensity. So to get color, you have to put filters in front of it and then, yeah, do this kind of math.

KA (00:46:54):

Yeah. So what we had to do is we are debayering the entire image on OpenMV Cam AE3, in software using Helium. What we were able to achieve is- This is a 400 megahertz CPU. We are able to do about 120 frames a second, at the VGA image resolution. Which is about 0.3 megapixels at, yeah, 120 frames a second.

(00:47:15):

So what is that? What is the math on that? 0.3 megapixels times 120? Yeah, about 36 million pixels a second with the processor. What is crazy here though is that if you try to do that on a normal regular Arm processor-

CW (00:47:33):

Which I have. <laugh>

EW (00:47:33):

<laugh>

KA (00:47:35):

Yeah. For M7, the previous generation, you would not get that. There is about a- Helium offers probably around 16 times the performance increase, realistically. That is huge.

(00:47:45):

Again, it takes an algorithm that would be totally not workable and makes it workable. Like, you would get maybe 20 or 30 frames a second at the best, and now we are at 120 frames a second. Right.

EW (00:48:02):

That is crazy. <laugh> Sorry.

KA (00:48:06):

Yeah. It is good. <laugh>

EW (00:48:09):

I am used to thinking about if you are going to do image stuff, if you are going to get complicated or you are not quite sure what you need to do, you probably need to go to the NVIDIA Orin or-

CW (00:48:23):

Oh! Well.

EW (00:48:23):

The TX2, or whatever. Usually I would first go to NVIDIA's website and see whatever their processor is, and what dev kit I could get there. Which then involves Linux and all of that. When did the microcontrollers catch up? Did they catch up? Or are they just one step behind, and I am three steps behind?

KA (00:48:47):

Well, they are still one step behind. If you look at NVIDIA Orin or et cetera, those have 100 TOPS. And so with the OpenMV Cam-

EW (00:48:55):

A hundred what?

KA (00:48:56):

A hundred Tera OPS. So a hundred trillion operations a second.

EW (00:49:00):

Thank you.

KA (00:49:00):

So microcontrollers are now just starting to hit up to one Tera op[s]. There is still a 100x performance difference there.

(00:49:10):

But, what is important to understand is with the current performance of these things, they are good enough to do useful applications. And what is valuable is that they can run on batteries. That is the big unlock here. So if you look at the-

CW (00:49:23):

My Orin can run on batteries. They just weigh-

EW (00:49:23):

<laugh>

CW (00:49:23):

Ten pounds.

KA (00:49:27):

<laugh>

CW (00:49:31):

And are carried on a six foot wingspan drone. I do not see the problem <laugh>.

KA (00:49:36):

Yeah, yeah. As long as you have a big vehicle, it is no issue. Right? But that is the challenge here, is that you need to have a big vehicle for that.

(00:49:46):

So what we are looking at is like, okay, with the OpenMV Cam AE3, for example, it is going to draw 60 milliamps. This is like, I cannot get over this number. It draws 60 milliamps of power. At full power!

CW (00:49:59):

Yeah. That is crazy. That is amazing.

KA (00:50:00):

At full power.

CW (00:50:01):

In terms of operations per watt, that is way beyond what Orin or anything does.

KA (00:50:06):

Well, think about it like this, a Raspberry Pi 5, without an external AI accelerator, that gives you a hundred giga OPS, if you peg every core at a hundred percent.

CW (00:50:17):

Okay.

KA (00:50:17):

And this thing is able to give you double that, with that much less power consumption. These AI accelerators are incredible in the performance. Again, 100x performance increase is nothing to laugh at. It is a pretty big deal. But we are looking at 60 milliamps power draw, full bore. So 0.25 watts or so.

(00:50:38):

We got it down to about 2.5 milliwatts at deep sleep. But there is some software optimization we still need to do, because we think we can get it below one milliwatt while it is sleeping.

(00:50:50):

Anyway, the reason to mention that though is that, okay, two AA batteries, that is one day of battery life at 60 milliamps. Two AA batteries. So you could have the camera just running all the time inferencing. Like if you want to do one of those chest mounted cameras, like the Humane Ai Pin, for example. This little thing could do that, and give you all day battery life, on, again, two Energizer alkaline AA batteries. Nothing particularly special. Cost a dollar each.
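As a rough sanity check on the one-day figure (assuming roughly 2000 mAh per alkaline AA, a common ballpark rather than a number from the episode), two cells at the roughly 0.25 watt full-power draw mentioned above do work out to about a day:

$$
2 \times 1.5\,\text{V} \times 2\,\text{Ah} \approx 6\,\text{Wh}, \qquad \frac{6\,\text{Wh}}{0.25\,\text{W}} \approx 24\ \text{hours}
$$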

(00:51:22):

So if you put a little bit more effort in, and you actually have maybe a $30 battery or something, you have more than a couple days of battery life. Then if you think about, "Okay, maybe I can put it into deep sleep mode, where it is waiting on some event to happen. Like maybe every ten minutes it turns on, and takes a picture and processes that."

(00:51:39):

Now you have something that you can build an application out of. Like let us say you want to detect when people are throwing garbage in your recycling can. You could put this camera in there, and every ten minutes, or every hour-

CW (00:51:52):

Wait, wait, wait. Why would I want an image of me making a mistake all the time?

KA (00:51:55):

<laugh>

EW (00:51:55):

<laugh>

CW (00:51:55):

Sorry <laugh>.

KA (00:52:00):

I live in a condo, so we have a shared place.

CW (00:52:02):

Oh. Okay. <laugh>

KA (00:52:02):

It is a problem constantly with people complaining, because in San Francisco at least, Recology does hand out fines for you violating that.

CW (00:52:11):

Yes.

EW (00:52:13):

I can make a little beepy sound, or-

CW (00:52:15):

Yeah. That is why-

EW (00:52:17):

Or change the chute, or

CW (00:52:20):

Like the time of flight sensor. If I wanted to make a smart bird feeder, I could wait until a bird was actually detected physically without using the camera first, before starting to take an image.

EW (00:52:31):

Yeah, you get an interrupt from your accelerometer, that the thing has-

CW (00:52:34):

Landed.

EW (00:52:37):

Got a bird.

CW (00:52:38):

Yeah, bird detected.

KA (00:52:38):

Oh yeah. No, very easily. And this processor can be in a low power state, waiting on that. The second it happens, yeah, you wake up and you proceed to take an image, and check to see- Let us say you want to know what birds are appearing in your bird feeder. Well, when birds touch the bird feeder, they cause it to shake, right? So you have got a nice acceleration event.

(00:52:58):

So you could have the accelerometer in a really low performance state, just waiting for when it sees any motion detection. When that happens, then the camera turns on, takes a picture, runs inference. Then if it sees a bird there, it could then connect to your Wi-Fi, or-

(00:53:14):

We are also going to be offering a cellular shield for the OpenMV Cam N6 and AE3. It could connect to the cellular system and send a message, like a text message to you, and then go back to sleep. Maybe it could text message the entire image too, if you wanted. So these things are going to be doable.

(00:53:33):

The best feature here is, again, it could last on batteries. Then you could also use a solar panel, for example, and have that attached. Then the battery life is really, at that point, infinite.

EW (00:53:44):

The bird feeders do exist. Do they already have OpenMV cameras in them?

KA (00:53:50):

No. They are using something else right now. But I do not imagine it actually does much processing on board though, to determine what bird type or et cetera. It would probably just get an image.

CW (00:54:01):

Cloud stuff.

EW (00:54:02):

Which means you are sending your data to the cloud. Which is someone else's computer in your yard.

CW (00:54:09):

They might be spying on your bird.

EW (00:54:10):

Well.

CW (00:54:10):

I know, I know. <laugh>

KA (00:54:14):

An example would be trail cams.

CW (00:54:15):

Yeah.

EW (00:54:15):

Yeahh!

CW (00:54:16):

So trail cameras right now, a big complaint of them currently, is that they take a lot of images of nothing. Because anytime there is any motion or whatever, they turn on, snap an image. So you might have-

EW (00:54:29):

Wind.

KA (00:54:30):

Yeah.

EW (00:54:31):

Wind is such a problem with such cameras.

CW (00:54:33):

Especially if they are near trees, which they are generally for trails. Yeah.

KA (00:54:37):

Yeah. Trail cameras are just known right now to take tons and tons of images of nothing. You will have limited SD card space on these, right? So if your trail camera is taking tons of images of nothing, then when it actually comes time for it to take a picture of something useful, it might have run out of disk space. It might have used up all of its batteries.

(00:54:57):

Or just if you want to go and actually do something with that data, now you have thousands of images you have to look through, trying to find the one that actually has the picture of the animal you are looking for.

(00:55:08):

So yeah, having this intelligence at the edge- Well, there we go again, using the word "edge." Having this intelligence in these devices really makes them much easier to use, if you think about it. Because now the system is actually doing the job you want, versus capturing a lot of unrelated things you do not care about.

CW (00:55:30):

There is a privacy argument there too, because the more we-

EW (00:55:32):

That is where I was headed. Yeah.

CW (00:55:33):

The more you push the intelligence to the edge, the less you have to move stuff to the cloud, where it is vulnerable. For certain applications, you might have an entirely closed system, that is totally inaccessible to anyone outside without physical access. Which is not possible, if you are shipping stuff up to a cloud server, to run on a GPU.

EW (00:55:52):

Some modem bandwidth is not that expensive anymore, but it is still not, "I want to send videos all the time." Sending a text message that says, "I saw this gecko you have been waiting for," would be way more useful.

CW (00:56:06):

<laugh>

KA (00:56:08):

Yeah, no, absolutely. Because otherwise, right now it would be, "Here is a picture of a gecko you were waiting for," and it is a picture of wind. Repeatedly over and over again. So yeah, no, it is going to be fun, what you can do with these smart systems, and what they are going to be able to do. Being able to run in these low power situations is important.

(00:56:27):

I mentioned earlier, I wanted to touch on how we got to the OpenMV Cam AE3, for example, being so tiny. Why did we create a one inch by one inch camera?

EW (00:56:37):

Yeahh! So tiny!

KA (00:56:39):

Yeah. Well. Honestly, I did not think this direction in the company would be something we were going to support. I wanted to keep the camera at the normal OpenMV Cam size. But the actual reason we ended up here, and made the OpenMV Cam AE3, is that the Alif chip was super hard to use at the beginning of last year. We were-

EW (00:57:00):

I remember some complaints around here too.

KA (00:57:03):

In a mission failure kind of mode with it, to be honest.

EW (00:57:08):

Yeah. There were a lot of promises for the Alif.

KA (00:57:11):

Yeah, there were a lot of promises. There were bugs in the chip. I know that if you listen to our last episode, you will have... <laugh>

CW (00:57:21):

Oh, wait. I am free to talk about this now. <laugh>

KA (00:57:23):

Yeah, yeah. There were some issues. In particular, USB was broken. I/O pins did weird things, like you had to set the I2C bus I/O pins to push-pull for it to work. Which, if you know about I2C, should be open-drain. Stuff like that. Repeated starts on the I2C bus did not work.

CW (00:57:46):

Did you encounter- I certainly did not.

EW (00:57:48):

<laugh>

CW (00:57:48):

Did you encounter any power issues? Brownouts, flash corruption, those kinds of things?

KA (00:57:55):

No. Luckily we did not encounter those.

CW (00:57:56):

Oh. Okay.

KA (00:57:56):

But we had issues with the camera driver. When you put it into continuous frame capture mode, it just overwrites all of memory with captured frames. Because it never resets its pointers when it is capturing images. It just keeps going, and incrementing, forever.

EW (00:58:12):

Okay. So yes. The Alif. But you got it working.

KA (00:58:17):

We got it working though. And now it is the best thing ever. It is crazy how your whole interpretation of these microcontrollers changes, once you get past all of the, "Oh my God- We are about to- This is the worst idea ever- Bugs."

(00:58:32):

Because the way that the OpenMV AE3 came about, was these bugs were so bad. We were running into so many issues. Because this is a brand new chip, by the way. This is a new processor.

CW (00:58:42):

Yeah. That is the issue. It is beta. When I started using it, it was beta silicon.

KA (00:58:48):

Yeah. Yeah. Beta silicon. They have finally got to production grade silicon now. So a lot of these bugs you will not encounter anymore. They fixed them. But we were in the full bore of that.

(00:58:56):

What we actually did was say to ourselves, "Okay, we put so much time and effort into this chip, and trying to make a product out of it. We need to ship something." We were just like, "Hey, what can we do to ship it?"

(00:59:10):

And it is like, "Well, okay. If we remove all of the features that make the regular OpenMV Cam fun, like the removable camera modules, and we just make everything fixed. Then there is a possibility, a hope, that we could actually build a product that makes sense."

(00:59:27):

So we were just like, "Okay. Removable camera module gone. Let us just make the camera module fixed. I/O pins gone. There are a lot of issues with the peripherals. Let us just get rid of that. Make it so there are minimal peripherals, minimal I/O pins exposed. This way we do not have to solve two billion issues."

(00:59:43):

So we just went down the line, basically just fixing everything. Instead of having every single feature possible exposed and usable, we just said, "We are cutting this, cutting that. Cutting this, cutting that."

EW (01:00:00):

Removing your flexibility, in order to optimize.

KA (01:00:03):

Yeah. We reduced the flexibility, versus the N6, which is super flexible and can do all this stuff. The AE3, we removed a lot of the flexibility. But that ended up actually creating one of our best ideas ever. I share this with the audience just to say, "Hey, good things come out of going on a bad journey," basically.

CW (01:00:23):

I am also constantly in favor of constraints. I think constraints can actually enhance creativity sometimes. And lead you places you would not necessarily have gone, if you had tried to just solve every problem, or be a general thing.

KA (01:00:37):

Yeah. For us, what we decided was, "Okay. Well. We do not know how to- So much stuff is having trouble on this chip. Let us just make it tiny. Everyone likes tiny. Use the smallest package they have. Just reduce the features. Not going to try to use every I/O pin." That actually yielded a lower cost.

(01:00:57):

But then we started to do fun stuff, and level up our abilities. We were like, "Okay, well I guess if we are going to make it tiny, we are going to use all 0201 components. We will use all the tiniest chips." Over the course of- I think it took me about three weeks or so, to design it originally. We managed to cram everything for this camera into a one inch by one inch form factor.

(01:01:23):

Just talking to people and showing this off, pretty much, everybody has been blown away. They are like, "What? This is a camera that is one inch by one inch. You have got everything on there. Processor, GPUs, NPUs, RAM." What makes the Alif chip so special is it has 13 megabytes of RAM on chip, meaning you do not even need external RAM to use the system, like all of these things.

(01:01:45):

Yeah, that emerged through this weird process, where we thought we were going one direction and going to make a normal system, and ended up somewhere entirely different.

EW (01:01:57):

You have made a system- Okay. So I have to admit, the N6 is a super cool addition to your product line. It makes a lot of sense. But the AE3 just gives me ideas.

KA (01:02:09):

Goosebumps. Right?

EW (01:02:10):

It makes me think about things differently, like different directions. One inch by one inch is too big to swallow, but there are a lot of places you could fit such a self-contained system.

KA (01:02:25):

Yeah. Yeah. That is why everyone has been- I am glad we ended up making this tiny camera, because I would not have gotten there. I was constrained by my own thought process, on what our system should be, given our previous form factor.

(01:02:36):

But yeah. Now it is like, yeah, it is legitimately small enough to put inside of a light switch. Anything you can think of. One inch by one inch fits almost anywhere.

EW (01:02:47):

Then your problems go back to how do you light it? How do you light the image well enough, that you can use the machine learning? But that is a separate issue that is everywhere.

CW (01:03:00):

Do you have an IR filter?

EW (01:03:02):

It does for some of them. There are FLIR boards that look really cool.

CW (01:03:06):

Right, but you can use an IR flood.

EW (01:03:09):

Oh, you mean just take out the little lens.

CW (01:03:11):

You can use an IR light.

EW (01:03:14):

Oh. Oh.

CW (01:03:15):

It takes more power.

EW (01:03:16):

That people cannot see, but-

CW (01:03:17):

But on detection, you can illuminate with IR.

KA (01:03:21):

Well. You could do that. We potentially might make a different variant of the AE3. Potentially. I am not saying we are going to do that. But you could use different cameras. Like we are supporting this new camera sensor called the "Prophesee GenX320," which is an event camera. It only sees motion. So pixels that do not move, it does not see.

(01:03:40):

This one can work in very dark environments. It is an HDR sensor, so it can work in bright and dark environments. That one, for example, is also very privacy preserving, because it literally does not see anything but motion. The pixels do not really have color. So if you just wanted to track if someone is walking by or something, that one could be used for that.

(01:04:00):

Also, because we are using- Thanks to our good relationship with PixArt, we actually have data sheet access for these cameras, and support.

CW (01:04:10):

How? How did you do that?

EW (01:04:10):

<laugh>

KA (01:04:10):

I know, right? It is great.

CW (01:04:12):

You can never talk to camera manufacturers.

KA (01:04:15):

Yeah. But we actually have the field application engineers on the line. We ask for help and they respond.

CW (01:04:20):

They probably tell you how to initialize them.

KA (01:04:22):

Yes! We got all of that! It was amazing!

EW (01:04:23):

<laugh>

CW (01:04:25):

Dammit!

EW (01:04:27):

Christopher is jealous.

CW (01:04:28):

So much time trying to get just cameras up and running. It is just so painful.

KA (01:04:32):

For the audience, if you do not know about this, a big camera everyone uses is the OmniVision stuff-

CW (01:04:40):

Yep.

KA (01:04:40):

Because they built so many of them.

CW (01:04:40):

Yeah <sigh>.

KA (01:04:40):

And OmniVision provides no help or support whatsoever to anyone using their products who is not a cell phone vendor. So you have to reverse engineer everything, from a data sheet that has basically no descriptions.

EW (01:04:54):

Or you pick it up from the internet, and you send it a random set of bytes that you do not know what they mean. But you know if you change it, something might go horribly wrong. Or it might get better. You do not know.

KA (01:05:02):

Things go horribly wrong though. That is the thing about these cameras: you try to figure out what the minimum set of bytes is, and then you realize that the default register settings do not work. It does not even produce images or function at all with the default settings on power on. You have to give it a special mixture of bytes. <laugh>

CW (01:05:21):

Some of which are undocumented. They are just bytes.

KA (01:05:23):

They are all undocumented, almost. They have reserved registers you end up writing to. It is like, "What does this byte pattern to this reserved register mean?"

CW (01:05:28):

Ah, yes. I did manage to get the data sheet for the OmniVision camera we were working with, which helped some. But 80% of the registers being written to were not in that data sheet <laugh>.

KA (01:05:39):

Yeah. Yeah. No, it makes it challenging. In particular, what is challenging about that is you cannot do stuff like actually set your image exposure correctly. So with these two cameras, for the N6 and AE3, we can actually control the exposure, control the gain, trigger the camera. All the features you would want out of a global shutter- Oh, also change the frame rate. Like everything you want to be able to control, we actually have the ability to control, precisely and correctly now.
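
On an OpenMV Cam, that kind of manual control is exposed through the sensor module in MicroPython. A minimal sketch, with placeholder values; exact option availability can vary by sensor and firmware version:

    import sensor

    sensor.reset()                          # initialize the attached camera
    sensor.set_pixformat(sensor.GRAYSCALE)  # grayscale is typical for machine vision
    sensor.set_framesize(sensor.QVGA)       # 320x240
    sensor.skip_frames(time=2000)           # let the sensor settle

    # Turn off the automatic algorithms and set exposure and gain by hand.
    sensor.set_auto_gain(False, gain_db=8)             # fixed analog gain (placeholder value)
    sensor.set_auto_exposure(False, exposure_us=5000)  # fixed 5 ms exposure (placeholder value)

    img = sensor.snapshot()                 # grab one frame with those settings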

EW (01:06:07):

We have kept you a little bit past the time, and yet I still have listener questions. Do you have a few more minutes?

KA (01:06:13):

Yeah. Let us go into them.

EW (01:06:14):

Matt, who I think might actually be able to help you if you answer this question correctly, asks, "If you were to wave a magic wand, and add three new and improved features to MicroPython, what would they be?"

KA (01:06:29):

Yeah. Well, we actually have been waving that wand. We are working with Damien George from MicroPython directly, to help launch these products.

(01:06:35):

We have been big supporters of MicroPython from the beginning. Each purchase of an OpenMV Cam actually supports the MicroPython project. We actually want to fund this, and make sure that when we are able to sell products based on MicroPython, MicroPython is also being financially supported.

(01:06:58):

What we have worked to improve, for example, is the Alif port- Damien has helped directly with that. So you will find that support for the Alif chip is actually going to be mainstreamed into MicroPython with MIT licensing. Our special add-ons for image stuff will be proprietary to OpenMV, but we will be mainstreaming the default Alif setup. So anyone who wants to use this Alif chip now will find that someone else has already fixed all the bugs for them.

EW (01:07:26):

<laugh> You are welcome.

KA (01:07:27):

So you will not have to fight through the giant- You will not have to wade through all of the crazy problems. That will have been done for you, and generally available to everyone in MicroPython.

(01:07:36):

Similarly, the same thing goes for the N6, the general purpose support for that. We are bringing these things to the community. People are going to be able to use a lot of the features we are putting effort into, and do things with them.

(01:07:50):

There is also a new feature to MicroPython, that we have supported, called "ROMFS." This is very, very cool. Remember how I mentioned those neural network processors execute models in place? So the way we actually make that easy to work with, is that there is something called a "ROM file system" on MicroPython that is about to be merged.

(01:08:12):

This allows you to- Basically, you can use desktop software to concatenate a whole bunch of files and folders, like a ZIP file. You can then create a ROM file system binary from that. Then that can be flashed to a location on the microcontroller. Once that is done, it appears as a /rom/<your file or folder name> directory. So you have a standard directory structure that can be used to get the address of binary images in flash.

(01:08:44):

What this means is that we can take all of the assets that would normally be baked into the firmware, and actually put them on a separate partition, where they can be updated, and your program then just references them by path instead of by address.

(01:08:57):

This actually allows you then to ship new ROM file system images. That could be new models, new firmware for your Wi-Fi driver, whatever, et cetera. It is a very powerful feature.
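
Once a ROM file system image has been flashed, using it from MicroPython is just ordinary file access under the mount point, which defaults to /rom. A minimal sketch; the model file name here is a hypothetical example:

    import os

    print(os.listdir("/rom"))              # see what was packed into the ROMFS image

    # Read an asset by path. Note that read() copies the data into RAM;
    # the execute-in-place benefit comes from runtimes that map the file
    # directly out of flash instead of copying it.
    with open("/rom/person_detect.tflite", "rb") as f:
        model_bytes = f.read()
    print("model size:", len(model_bytes), "bytes")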

EW (01:09:11):

I love this. You could also- You said, "Assets," and so I immediately went to display assets.

CW (01:09:17):

Mm-hmm.

KA (01:09:18):

Whatever you want, and it is mapped into flash, so it is memory-mapped.

(01:09:24):

Yeah!

CW (01:09:24):

That is nice.

KA (01:09:24):

Yeah.

EW (01:09:25):

That is awesome.

CW (01:09:26):

We are finally entering the nineties.

KA (01:09:31):

<laugh>

EW (01:09:31):

<laugh>

KA (01:09:31):

Since everyone is listening here, right now what you have to do, is you have to bake things directly into the firmware. But this means the address of that stuff always changes, constantly. So it means you have to change the-

CW (01:09:40):

Or build your own weird abstraction thing, that has- Yeah. That is mapping addresses.

EW (01:09:45):

Do not say it is weird. It is perfectly fine. It was an image library. <laugh>

KA (01:09:50):

No. But it is super helpful. It is actually a problem, because if you think about the FAT file system, the problem with FAT file systems is they get fragmented, right? Files are not necessarily stored linearly. Every four kilobytes or so is chunked up, and the chunks can be located all over the disk, so it is impossible to execute those in place. So it is a magically good feature.

(01:10:12):

One other thing we are working on, which is a request for comment I guess right now, but hopefully it will get worked on, is the ability to have segregated heaps in MicroPython. What I mean by this is right now you have one giant heap, and it has all the same block size. This is a problem because, guess what? On the OpenMV Cam AE3, we have a four megabyte heap on chip, just for your application. Four megabytes.

(01:10:41):

What this means now is that you can just allocate giant data structures, and do whatever the heck you want. But it also means that if you are still storing the heap as 16-byte blocks, you have a lot of small allocations, so it takes forever to do garbage collection.

(01:10:55):

What we are trying to do is have it so MicroPython can have different heaps in different memory regions. Where you have small blocks in one heap, larger blocks in another heap, and then the heap allocator will actually traverse through the different heaps, looking for blocks of a size that makes sense to give you.
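
To see why one big heap of small blocks hurts, here is a minimal MicroPython sketch that times garbage collection as the number of live small allocations grows. The counts are arbitrary placeholders; scale them to the heap on your board.

    import gc
    import time

    def time_collect():
        t0 = time.ticks_us()
        gc.collect()
        return time.ticks_diff(time.ticks_us(), t0)

    print("mostly empty heap:", time_collect(), "us")

    # Many small live objects means many heap blocks for the collector to walk.
    small = [bytearray(16) for _ in range(20000)]
    print("after 20000 small allocations:", time_collect(), "us")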

EW (01:11:12):

All right. You talked there about features that are going into MicroPython, and that last one was a new feature.

KA (01:11:19):

We are trying to get that one. That one has not actually been implemented yet. That one is one we want to see happen though, because it is important for dealing with megabytes of RAM now. Which is like, we did not think we would ever get there, right? Everyone would be stuck on kilobytes of RAM with MicroPython. But nope. Megabytes now.

EW (01:11:35):

Why would you want more than 42K of RAM?

KA (01:11:37):

640?

CW (01:11:39):

It is very funny that we are still talking about microcontroller-level quantities of RAM and ROM.

EW (01:11:44):

I know. We are still talking about megabytes.

CW (01:11:46):

With this huge thing on the side, that does a ton of compute.

EW (01:11:49):

Yes.

CW (01:11:50):

Some portions are moving up to desktop class, and some are still stuck in the- That is just the way it is. If you want it to be cheap and small, some things are going to have to be cheap and small. But it is nice that some things are advancing.

KA (01:12:05):

All right. I think we got two more listener questions?

EW (01:12:09):

Couple more, yeah. Simon wanted to know more about edge inferencing and optimization, and all of the stuff that we did kind of talk about. But we did not mention one of the features of the N6, that you are very excited about.

KA (01:12:24):

Yeah, yeah. The STM32N6 actually has an amazing new hardware feature, which is, it has got H.264 hardware encoding support onboard.

CW (01:12:32):

Oh, very good.

EW (01:12:34):

What does that mean for those of us without perfect memories?

KA (01:12:38):

Oh. Yeah. It can record MP4 videos.

EW (01:12:39):

Oh, yes!

KA (01:12:42):

This means you would no longer need to be running a system that has Linux on board, to have something that can stream H.264 or MP4 videos. I mentioned the N6 has the NPU, so it has got the AI processor. It has got something called the "ISP," the image signal processor, so it can actually handle camera images up to five megapixels with zero CPU load.

(01:13:02):

Then with the H.264 on board, it can then record high quality video streams. That can either go over ethernet. It has got a one gigabit ethernet interface on board. Again, a microcontroller, with one gigabit ethernet.

CW (01:13:14):

It is so funny. We are putting these high powered things all around the edge of this tiny little CPU. <laugh>

KA (01:13:19):

Well, that is the thing. The CPU is not tiny. It is 800 megahertz, with vector instructions.

CW (01:13:22):

Well. It is the RAM and stuff. Yeah. I know. I know.

KA (01:13:24):

It outperforms A-class processors of yesteryear, right? It is new tech.

(01:13:30):

But then you also have Wi-Fi and Bluetooth. So you could stream H.264 over the internet, or you can send that to the SD card. Even the SD card interface has UHS-I speeds, which is 104 megabytes a second or so. Yeah, you can push that data anywhere you want, and actually do high quality video recording.

CW (01:13:50):

Amazing.

EW (01:13:52):

Okay. Tom wanted to ask about OpenMV being the Arduino of machine vision. But then that led me down a garden path where you actually worked with Arduino. Could you talk about working with Arduino, before we ask Tom's question?

KA (01:14:09):

Yeah. Working with Arduino has been excellent. Really, really good. Thanks to their support, we really were enabled to level up the company, and get in touch with customers that we never would have met.

(01:14:19):

Obviously as a small company, people do not necessarily trust you. Having Arduino as a partner has really helped us grow, and meet different customers who are doing some serious industrial applications, that would not have considered us otherwise. So it has been a really good deal.

(01:14:37):

We are super happy for Arduino, and working with them actually. And we are super glad that they really supported us in this way, and that they wanted to work with us.

EW (01:14:47):

This was the Arduino Nicla Vision? Now Arduino is manufacturing it?

KA (01:14:52):

Yes. Yes. The Arduino Nicla Vision, that is where our partnership has been. Also the Arduino Portenta, that is where we started. We also support the Arduino GIGA. Those three platforms run our firmware. We basically make those work, and have really shown off the power of those systems.

(01:15:11):

I think you had something else you wanted to mention.

EW (01:15:17):

Well, I did. But then I wanted to ask about Arduino versus MicroPython, because I cannot actually go in a linear manner.

KA (01:15:25):

Yeah, well this is one of those divisive things I wanted to talk about a little bit more. It is not really Arduino. It is really more of a C versus MicroPython question. Where do you think things are going? The best way to say it is, again, I mentioned that these microcontrollers have megabytes of RAM for heap now. Megabytes of RAM.

(01:15:50):

I mentioned that we can allocate giant data structures. We have neural network processing units. We actually have something called "ulab," which is a library that looks sort of like NumPy, running on board the OpenMV Cam too, which lets you do array processing with up to four dimensions.

(01:16:12):

This is quite useful for doing all the mathematics for pre- and post-processing neural networks. And also doing- Like, you want to do matrix multiplications, matrix inversions, determinants. All the things where you would need linear algebra to do signal processing and sensor data processing. All of that is on board.
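
As a concrete example, here is a minimal ulab sketch of the kind of linear algebra being described, runnable on MicroPython builds that include ulab. The numbers are arbitrary placeholders; ulab also ships FFT routines under its numpy.fft module.

    from ulab import numpy as np

    # Solve a small 2x2 system A x = b using inversion and a matrix multiply.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    b = np.array([1.0, 2.0])

    A_inv = np.linalg.inv(A)          # matrix inversion
    x = np.dot(A_inv, b)              # matrix multiplication
    print("solution:", x)
    print("determinant:", np.linalg.det(A))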

(01:16:33):

A question I posed to many people, is where do you want to be doing all of that high level mathematical work? Do you still want to be writing C code to do that? Or would you like to be in Python?

EW (01:16:47):

Python <laugh>. So much Python. I want to be in Python. I am going to be in Python. I want to be in a Jupyter Notebook on Python <laugh>.

KA (01:16:54):

Well, that is the beautiful thing with MicroPython and with NumPy-like support. This means you can actually take your Jupyter Notebook code almost directly and run it onboard the system.

EW (01:17:04):

Right!

KA (01:17:06):

So this is where I think things are going, which is it is just now that we are starting to get free of the limitations on- Again, like the Alif Ensemble chip, that is a six by seven millimeter package with 13 megabytes of RAM on board. And the OpenMV Cam and then the N6 processor. That comes in different package sizes, but you can get one down to a similar size and footprint. It has less RAM on board, so it might need some external memory, but still comes with four megabytes on chip.

(01:17:36):

When you look at these things, it is like, yeah, we are in a new world, where it starts to make sense to actually invest in standard libraries and higher level programming. The best way to say it would be, "What is the default FFT library for C?" Is there one?

EW (01:17:54):

CMSIS?

CW (01:17:57):

The one I wrote?

KA (01:17:58):

Yeah, the one you wrote.

CW (01:17:59):

For whatever project I am running.

EW (01:17:59):

No! At least use CMSIS.

KA (01:18:01):

Well, you have got CMSIS.

EW (01:18:04):

It may not be efficient, but it is okay.

CW (01:18:06):

The one out of "Numerical Recipes in C" from 1984.

EW (01:18:10):

<laugh>

KA (01:18:12):

That is the challenge- Okay, let us say that one exists. What about doing matrix inversion?

CW (01:18:19):

Yeah. Yeah. Yeah.

KA (01:18:21):

Which one are you using there? These are all things where having a standard package- But then also you can accelerate this with Helium. So all of these standard libraries that are in MicroPython could be Helium-accelerated under the hood.

(01:18:33):

Now you are talking about a system where developers can all work on the same library package and improve it. Everyone can utilize that and be efficient at coding. Getting more done with less mucking around. Versus rewriting all your C code with Helium, which is a recipe for lots of bugs, and a lot of challenges if you are doing it brand new for each system.

EW (01:18:57):

How often does somebody suggest you rewrite the whole thing in Rust?

KA (01:19:02):

Ah. Actually, not so much. I hear about all these conversations theoretically. I do not know though if I hear too many of them from a practitioner standpoint.

EW (01:19:10):

Okay, so Tom's question. Tom assumes you are targeting some low cost hardware. Is there some higher end hardware also? He wants a global frame shutter that has been tested to just work, in the spirit of Arduino. He also wants a lens upgrade, a Nikon mount and a microscope mount.

CW (01:19:30):

He is shopping on our podcast?

EW (01:19:32):

I know. This is like, "Okay. Now I want to go to the lens aisle, and then-" But you have a lot of these.

KA (01:19:41):

Yeah. I would say we are targeting global shutters by default now, in our systems. We just are going to have that.

CW (01:19:46):

What that means for people is-

(01:19:47):

Yes, please.

(01:19:48):

Rolling shutter. It is the way the camera reads out the frame. Cheaper cameras- actually most cameras with electronic digital sensors- do rolling shutter. Where they will read out rows of the image kind of slowly, not all at once. So if there is motion or something while it is reading out the image, you can get this weird artifact where something that is moving-

EW (01:20:17):

My head is over here, and my body is over there.

CW (01:20:19):

Not quite. But something moving might be diagonalized.

EW (01:20:21):

Yeah.

CW (01:20:22):

Right. Because some part was here while it was reading out, and now it is over here, so it is skewed. Global shutters read out the entire- They capture and read out-

KA (01:20:33):

They expose the image at once.

CW (01:20:33):

Yes. That is it. They expose the entire thing at once, instead of by rows. Yes. I was getting tripped up between readout and exposure.

KA (01:20:41):

Yeah. Yeah. But we are going to make that standard now though. We think it should be, because we try to do machine vision. We are not trying to necessarily take pretty pictures. But we will also have a regular HDR rolling shutter for high megapixel counts.

(01:20:53):

Then we actually are working on plans for two megapixel versions of the camera soon. Those will launch later. But we have a path forward for actually increasing the resolution for the global shutters.

(01:21:08):

Regarding the Nikon mount and microscope mount, given the small size of our company, we are probably going to leave that up to the community. But we think people are definitely going to be able to build that. Once the system gets out there, we will see people making these things.

EW (01:21:24):

That is just a 3D printer thing, right?

KA (01:21:25):

Yeah. Because we have 3D files now in CAD for everything we do. Those are available.

EW (01:21:31):

So you just take that and you take your microscope, and you print up what you need to translate them. Says the person who did not even look at the new 3D printer that arrived at the house.

KA (01:21:45):

<laugh>

EW (01:21:46):

Anyway. You already do lens shifts. I saw a bunch of those.

KA (01:21:52):

Lens shifts?

EW (01:21:53):

Lens upgrades. You have different lenses already.

KA (01:21:54):

Yeah, we do lens upgrades.

EW (01:21:56):

Not for the AE3, because that is the super small one and it has been optimized. But for just about everything else you have.

KA (01:22:02):

Yeah. For the N6, tons of lenses. We actually have a lot of features for the N6. We have got a thermal camera. We are actually going to have a dual color and thermal camera, so you can do a FLIR Lepton and a global shutter at the same time. So you can do thermal and regular vision at the same time.

CW (01:22:16):

Nice.

KA (01:22:16):

We also have got a FLIR Boson, which is high resolution thermal. Then we have the Prophesee GenX320, which is the event camera. Then we have got an HDR camera, as I mentioned, five megapixels, which will be like your high res camera. Then it comes with the default one megapixel global shutter. So you have got options.

(01:22:33):

Then all of those besides the GenX320 have removable- Sorry. The regular cameras, the color ones, just have removable camera lenses, so you can change those out too.

CW (01:22:41):

All right.

EW (01:22:43):

All right. I have to go write down some of these ideas that I have gotten through this podcast, with what I want to do with small cameras and different cameras. And microscopes and moss and tiny snails. Do you have any thoughts you would like to leave us with?

KA (01:22:59):

Yeah, no. Just thank everybody for listening. We are excited about what people are going to be able to do with these new systems. If you have a chance, check out our Kickstarter. Take a look and buy a camera, if you are so inclined.

EW (01:23:15):

Our guest has been Kwabena Agyeman, President and Co-Founder at OpenMV. A Kickstarter for the AE3 and the N6 just went live, so it should be easy to find. You can check the show notes to find the OpenMV website, which is openmv.io, so it should not be hard. There are plenty of cameras there that you do not have to wait for. Along with all of these other accessories we have talked about.

CW (01:23:42):

Thanks, Kwabena.

KA (01:23:42):

All right. Thank you.

EW (01:23:45):

Thank you to Christopher for producing and co-hosting, and not leaving in that section that he really, I hope, cut. Thank you to our Patreon listener Slack group for their questions. And of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(01:24:00):

Now a quote to leave you with. From Rosa Parks, "I had no idea that history was being made. I was just tired of giving up."