404: Uppercase A, Lowercase R M

Transcript from 404: Uppercase A, Lowercase R M with Reinhard Keil, Elecia White, and Christopher White.

EW (00:00:06):

Welcome to Embedded. I'm Elecia White alongside Christopher White. Our guest is Reinhard Keil. If that name sounds familiar, have you ever heard of the Keil compiler?

CW (00:00:19):

Hi, Reinhard. Welcome to the show.

RK (00:00:22):

Hello, Christopher. Hello, Elecia. Thanks for giving me the time to talk to you.

EW (00:00:28):

Could you tell us about yourself as if we had met after you spoke at an Arm Conference?

RK (00:00:34):

Yeah. As you said already, my name is Reinhard Keil. Today I'm working for Arm on the technology for embedded. That includes, of course, also IoT and machine learning. But I started my professional career a long time ago, actually at Siemens in the semiconductor software division. Today that would be called the Embedded Tools Division, but at the time "embedded" was not invented yet.

RK (00:01:00):

As most of you know, together with my partner, I created a startup company where we were focusing on tools for embedded. We are well known for the C51 compiler, you mentioned it already. But that's by far not the only product we made. When we sold the company to Arm, we had distributors in 40 countries and a large user base. During my time at Arm, I headed the Keil MDK team and initiated CMSIS, the software standard for microcontrollers. Nowadays, I'm part of a team that defines the next generation software and tooling, and we believe that cloud native development will become important.

EW (00:01:45):

We have so much to talk about with all of that. Do you mind if we do lightning round first?

RK (00:01:51):

No. Go ahead please.

CW (00:01:54):

How do you spell Arm? Which letters are capitalized?

RK (00:01:58):

Oh, today we actually have lowercase "arm" as you see in the logo. And in writing, we uppercase the A and then lowercase R M.

CW (00:02:10):

I've been doing it wrong for a long time.

RK (00:02:13):

Yeah. Before, it was actually all capitalized. So four years ago we changed the logo. A long time ago it was called Acorn RISC Machines or Advanced RISC Machines, but going to the stock market, "Advanced Risk" is not a good name. Yeah. That's the reason why Arm is basically just Arm.

EW (00:02:41):

As someone whose name is misspelled often, I have some sympathy with your name. What is the worst misspelling you have ever seen?

RK (00:02:51):

Oh, well. Of course, when you talk to English people, then they don't get Keil right. They say all kinds of fancy things. Keel is the most prominent one. I'm used to it, so I can cope with it. When it comes to Asia, of course, they have all the problems with my name, but it is what it is.

CW (00:03:17):

What's your favorite C keyword?

RK (00:03:21):

C keyword, oh, that's really a hard one. I don't have a favorite C keyword in reality. So #include, of course, is the keyword where you can include a lot of predefined stuff. So that's maybe the most powerful one, if you see it this way, but it's a preprocessor directive.

EW (00:03:45):

And do you have a favorite processor of all time?

RK (00:03:47):

Hmm, I would say my favorite one is a Cortex-M4 based microcontroller, maybe an ST one. The STM32F4 is really a cool microcontroller from my perspective.

EW (00:04:02):

I could see that. That was a good one.

CW (00:04:05):

I agree with that. You started your compiler company in 1980, what was it?

RK (00:04:16):

Yeah. Actually we started '82.

EW (00:04:18):

82.

RK (00:04:18):

But at the time it was more a hobby than a company. I was a student at that time. And I was a nerd, an electronics nerd; perhaps I am still a nerd these days, who knows? At the time we focused on electronics and, as I said, the company was somewhat a hobby. Actually, we didn't start with compilers at all. Our first product was a telephone switchboard, completely solid state, no microcontroller included.

EW (00:04:55):

When did you first start selling a compiler?

RK (00:05:00):

Compiler? We brought it to market in 1988. So this was six years later.

EW (00:05:11):

Did you just sit down one day and say, "Enough with the telephones, I want to write a compiler?"

RK (00:05:19):

It was a journey. We knew that we needed a microcontroller in a future phone system. And the problem was, an Intel development system at the time was the price of a sports car. So I got a job at Siemens, that was luck, in the tools division. I had, of course, some friends that actually helped me to get there. And it was a part-time job. I learned a lot there.

RK (00:05:46):

At first I had the idea we create an operating system that could run the Intel development software on CP/M computers, the better computers at the time. It was an 8080-based system, quite low power compared with today's standards. The operating system was our first commercial software product, and it allowed us to work with professional tools.

EW (00:06:15):

At that time, writing a compiler for that sort of computer meant a high end compiler and a high end processor. But now the compiler is more often used for embedded systems, which are the smaller computers, the resource constrained computers. When did you make the decision, or how did you make the decision, to stop focusing on the high end and go towards the low end?

RK (00:06:46):

Actually, high end was x86 at the time. The low end was the 8051. To some extent Intel claimed that the 8051 would be replaced with 16-bit microcontrollers; somehow, I did not buy into that, because the 8051 was a cool chip, pretty cool for the telephone switchboards that we built.

RK (00:07:13):

So we realized that actually many of our customers were using our operating system to develop 8051-based applications. Before we went into building a compiler, we developed an assembler and a debugger with an integrated simulator. We started partnerships with emulator companies; that combination of technology and strategic alliances let us expand the business. We also tried to build an assembler for the Intel 8096, the 16-bit controller. But soon we realized we needed to focus on one target because we were small at the time, and this was the reason why we focused on the 8051.

RK (00:08:03):

Initially I didn't have the plan to create a compiler. I wanted a partnership with a company that had one already, and I called up a company in Germany that had a commercial C compiler and suggested that we improve the product together. But the guy didn't see the value of this kind of partnership and did not believe in the improvements that I was proposing.

RK (00:08:27):

Then in 1986, we decided to write our own compiler. To be fair, most of the ideas came from a friend, Peter, who then worked with me until he retired. His idea was to create a compiler from scratch. I met him during my time at Siemens, and I was focusing on the go-to-market plans, that's how I would call it today. This was basically: what kind of partnerships do we need once we have the product? And Peter was focusing on getting the compiler done, but the project was very complex and intense.

RK (00:09:09):

It was, I would say, five to ten times more complex than what we did before. And so I helped him in the last years. In total, it took us two years to get to this compiler. I was then focusing on the code generation part of the compiler, and we brought it to market at the Electronica exhibition in 1988. This was basically the time-to-market window that we had. This exhibition was every two years, so we were under time pressure to hit that time spot. And yeah, we partnered with four emulator vendors to showcase our compiler. It was an instant success [inaudible] from there.

CW (00:09:53):

When you wrote the compiler, that was early days. So did you have to build your own parser? Let me back up: I was involved in a compiler project about 10 or 15 years ago, and we built the parser with Lex and Yacc and things that were freely available. There probably wasn't such a wide range of available tools back then, right?

RK (00:10:16):

Yeah, we used Yacc at the time. Actually, Peter was very innovative also at that time, but the whole thing is written from scratch. Yes.

CW (00:10:26):

Amazing

RK (00:10:27):

There was no base software that we used to create it. There was a book that taught compilers, but the material was pretty bad. We also used the dragon book. It was a famous book, but the dragon book had all these algorithms written in a way where resources were not a problem. But at the time, resources in a DOS computer were a problem. The complete computer had 640 kilobytes, not megabytes, kilobytes. And in these 640 kilobytes there was the operating system, there was the compiler, and the program to compile. So you can imagine: the first version was actually not very optimized, but later on, to actually grow the compiler performance, we had to write overlays and work with overlays to manage the memory constraints, and switch carefully.

CW (00:11:32):

I remember having to switch floppy discs when the linker was needed versus the assembler and the compiler. It is so-

RK (00:11:38):

Yeah. Yeah. Exactly. For people that start today, it is hard to believe that a computer can run with four megahertz clock speed, but six megahertz was the high end in the eighties. The PC AT was the computer of the time, the fastest system that you could get. And it was a six megahertz, later then eight megahertz, variant.

EW (00:12:07):

But you're compiling for the 8051. And I'm still boggled by the idea of putting an RTOS, an operating system of any sort, on an 8051. Those are, like... maybe I misunderstood. Were you putting operating systems on the 8051, or bigger systems?

RK (00:12:25):

No. The 8051 ran bare metal code. So basically the most used design pattern is an endless loop. However, we created also an RTOS in the early nineties because, to be fair, the biggest support nightmare were the people that tried to create an operating system for the 8051. The 8051 is not a stack-based machine. Operating systems usually work very well on stack-based machines where you have stack addressing. The 8051 doesn't have that. This makes it so tough to create a compiler for it. To work around that, we had to invent what is called a compile-time stack: a stack layout that is actually arranged at link time. And I think this was the innovation that made our 8051 compiler so different from others.
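The compile-time stack idea can be modeled in portable C. The sketch below is illustrative only, not the actual C51 implementation; the function names and the overlay layout are invented. The point is that two functions which, per the call tree, can never be active at the same time may share the same fixed data addresses for their "locals":

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model of a "compile time stack": on the 8051 there is
 * no efficient stack addressing, so the linker analyzed the call tree
 * and assigned each function's locals and arguments to fixed data
 * addresses. Functions that are never simultaneously active may be
 * overlaid onto the same addresses. */

static uint8_t overlay[4];          /* shared data segment */

/* "locals" of leaf_a, placed at overlay[0..1] by the imaginary linker */
static uint8_t leaf_a(uint8_t x)
{
    overlay[0] = x;
    overlay[1] = overlay[0] + 1;
    return overlay[1];
}

/* "locals" of leaf_b reuse the very same addresses */
static uint8_t leaf_b(uint8_t x)
{
    overlay[0] = x;
    overlay[1] = overlay[0] * 2;
    return overlay[1];
}

/* root calls the leaves strictly one after another; that sequencing
 * is what makes the overlay safe. Recursion or reentrancy would not
 * be, which is why such compilers restrict them by default. */
uint8_t root(uint8_t x)
{
    return leaf_a(x) + leaf_b(x);
}
```

This also explains the operating-system difficulty Reinhard describes: with locals at fixed addresses instead of on a stack, switching between threads is no longer just swapping a stack pointer.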

CW (00:13:25):

I didn't know that. That's amazing.

RK (00:13:27):

Yeah. 8051 actually is a challenging thing when it comes to C compilers.

EW (00:13:33):

I've used the 8051 extensively and the Keil compiler for it. And I don't know that I knew that.

CW (00:13:42):

He probably did his job very well.

EW (00:13:42):

Yes.

RK (00:13:43):

Yeah. Because we had hidden all this complexity; for somebody that writes vanilla C code, it was irrelevant. But inside, and actually also when it came to operating systems, you need to know the call tree at compile time. And this was the reason why in 1990 I wrote an operating system. It was more a demo operating system than a product. But later on, we decided to make a product. The real reason was the support cases that we had.

CW (00:14:20):

Yeah. How do you switch threads without a stack?

EW (00:14:23):

It's just, okay. I'm not even going to try to follow that. So you had the Keil compiler for the 8051 and you mentioned Yacc. So you were aware of yet another compiler compiler, even though you ended up doing it from scratch?

RK (00:14:40):

No. No. We used Yacc, we used Yacc.

EW (00:14:42):

Oh, okay. GCC was first released in 1987.

RK (00:14:50):

Yeah.

EW (00:14:50):

What did you think of those crazy folks giving away their work?

RK (00:14:57):

To be fair, GCC was at that time not on our radar. There were six or seven commercial compiler companies that had 8051 compilers on the market; these were our real competitors. We were basically fighting with our commercial compiler competitors at the time. Of course, GCC is today a very good compiler. We are using it a lot in Arm and it's definitely good. But for the 8051, because of the challenges I mentioned, it was for a long time no competitor. I think SDCC, the Small Device C Compiler variant, came to market, or came to life, in the early 90s or beginning of the 2000s. And at that time we were so far ahead and established that we hardly realized that there was a free compiler.

EW (00:15:52):

We were so far ahead, we didn't even see them.

RK (00:15:59):

Yeah. Okay. For x86, for sure, GCC was there, but we used Microsoft tooling to develop our tools, and GCC was at that time not important.

CW (00:16:15):

And to be fair, GCC wasn't even doing that great for Cortex-M Arm until recent memory, at least.

RK (00:16:24):

Until we brought it in house and fixed all the problems. Yeah. Arm does contribute a lot to GCC and makes sure that it is a decent compiler for the processors. So we do a lot of work on open source tooling these days.

EW (00:16:39):

I remember the arm-none-eabi debacle, where you had to choose that sort of thing and explain it to people, which made no sense, until finally we all agreed on the same EABI. Yeah. So you use GCC, and you charge for Keil?

RK (00:17:10):

Yeah.

EW (00:17:11):

How? And I understand because you need to make a living. I get that, but it is a very different model of the world.

RK (00:17:20):

And both coexist very well.

EW (00:17:26):

Okay. How do they coexist because they do seem so different?

RK (00:17:30):

Yeah. Well, today... as I mentioned, when I came to Arm, it was actually in 2008, and in CMSIS, at the lowest level, in CMSIS-Core, we have a lot of compiler macros that actually make it irrelevant whether you work with GCC or with the Arm compiler or with an IAR compiler, to mention even our competitors; we call them partners.

RK (00:18:00):

So what we do is we have a software layer where the compiler at the end is not relevant. Of course, it becomes relevant when code size matters, where performance matters, where aspects like certification matter, or where you actually buy a service from Arm, not a compiler, because we actually sell more than a compiler. It's not just a compiler that we provide. Therefore GCC and the commercial Arm compiler coexist very well.
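The compiler-abstraction layer Reinhard describes works roughly like the sketch below: one header detects the toolchain and defines common macros, so application code compiles unchanged under Arm Compiler, GCC, or IAR. The macro names follow CMSIS-Core conventions, but the dispatch is abbreviated and illustrative, not the real `cmsis_compiler.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Abbreviated sketch in the spirit of CMSIS-Core's compiler
 * abstraction: select toolchain-specific spellings once, behind
 * common macro names. */
#if defined(__ARMCC_VERSION)            /* Arm Compiler 6 */
  #define __STATIC_FORCEINLINE  __attribute__((always_inline)) static inline
  #define __WEAK                __attribute__((weak))
#elif defined(__GNUC__)                 /* GCC / Clang */
  #define __STATIC_FORCEINLINE  __attribute__((always_inline)) static inline
  #define __WEAK                __attribute__((weak))
#elif defined(__ICCARM__)               /* IAR */
  #define __STATIC_FORCEINLINE  _Pragma("inline=forced") static inline
  #define __WEAK                __weak
#else                                   /* fallback for other hosts */
  #define __STATIC_FORCEINLINE  static inline
  #define __WEAK
#endif

/* Application code is then written once against the macros.
 * (On Arm, CMSIS provides __REV for this byte swap; here we keep it
 * portable so the sketch runs anywhere.) */
__STATIC_FORCEINLINE uint32_t reverse_bytes(uint32_t v)
{
    return (v << 24) | ((v & 0xFF00u) << 8) |
           ((v >> 8) & 0xFF00u) | (v >> 24);
}
```

The same source then builds with any of the supported toolchains, which is exactly why the choice of compiler becomes "irrelevant" at this layer.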

CW (00:18:35):

You've mentioned CMSIS a couple of times and I think it's really, really cool, but I don't think we've described what it is. Could you describe what CMSIS is briefly?

RK (00:18:43):

Yeah. CMSIS is the acronym for the Common Microcontroller Software Interface Standard. And it has today, I think, nine components, if I remember correctly. So it has a lot of aspects. It starts with CMSIS-Core, which is the base framework for the processor of a microcontroller.

RK (00:19:05):

But then we have also a DSP library. We have an RTOS abstraction layer. We have a neural network library. Then we have, on the tooling side, a thing that we call CMSIS-Pack; we'll come to that in a minute. We have CMSIS drivers that provide API interfaces in a consistent way. Then we have a delivery standard for peripheral descriptions called SVD that gives you the [inaudible 00:19:37] in the debugger of the peripheral symbols. And we have CMSIS-DAP. Our latest components are CMSIS-Zone, which is there to partition multi-processor systems or the TrustZone secure and non-secure areas, and a CMSIS build system that actually uses the same specs and makes it CMSIS compliant.

RK (00:20:04):

So there are a lot of components today. There is actually a blog post which explains this very well. It has the title "Which CMSIS components should I care about?" and it's written by one of our support engineers. I can recommend it because it gives you basically an insight into what is relevant to a developer, and not everything is relevant to a developer. Several of the components are there to help the silicon industry.

EW (00:20:35):

Yes. And some of those are really interesting. You mentioned SVD and DAP. The SVD is the part that other compilers, other systems, other debuggers use, because it describes all of the boards or all of the processors. So if you have a Cortex F4 and a Cortex L0... sorry, a Cortex-M0 and a Cortex-M4, the description of what SPI is available on the ST versus the NXP, all of those are in the SVD files?

RK (00:21:15):

Yeah. This describes basically the user-facing peripherals of the device. This is what the SVD file provides. And we have today more than 9,000 different microcontrollers described with SVD files and with device family packs, which basically collect the SVD files and add to them some header files, drivers, configurations and the like. So the ecosystem for CMSIS is immense. We have 60 different vendors that produce these deliverables, these CMSIS packs, and they are consumable by many IDEs, directly or as an integral part of the IDE. Some companies, for example, use just the SVD file because they are focusing on the debugger.
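To make the SVD point concrete: an SVD file is an XML description of a device's peripherals, and the device headers generated from it typically look like the C sketch below, a register-layout struct plus a base-address macro. The "UART" here is hypothetical, invented for illustration, not any real vendor's part:

```c
#include <assert.h>
#include <stdint.h>

/* What a header generated from an SVD description tends to look
 * like: a struct laying out the peripheral registers at their
 * offsets. The peripheral below is made up for the sketch. */
typedef struct {
    volatile uint32_t DATA;     /* offset 0x00: transmit/receive data */
    volatile uint32_t STATUS;   /* offset 0x04: status flags          */
    volatile uint32_t CTRL;     /* offset 0x08: control               */
    volatile uint32_t BAUD;     /* offset 0x0C: baud-rate divisor     */
} UART_TypeDef;

#define UART_STATUS_TXE  (1u << 0)   /* transmit buffer empty */

/* On real hardware the device header would pin the struct to a fixed
 * address, e.g.  #define UART0 ((UART_TypeDef *)0x40004000u)
 * For this host-side sketch we pass a pointer instead. */
static void uart_send(UART_TypeDef *uart, uint8_t byte)
{
    while (!(uart->STATUS & UART_STATUS_TXE)) {
        /* busy-wait until the transmitter is ready */
    }
    uart->DATA = byte;
}
```

A debugger consumes the same SVD data the other way around: it uses the register names and offsets to display peripheral state symbolically, which is the use case Reinhard mentions for companies that take only the SVD file.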

RK (00:22:15):

They just use the SVD file that is part of the distribution. So you are right, SVD is one of the important components. Then there is CMSIS-DAP, the debug access firmware; Debug Access Port is what that stands for. And this gives basically a consistent way to talk to the CoreSight registers. It's a lightweight firmware that can be adapted in many different flavors. It is configurable and supports also SWO trace, for example. This is basically what we recommend to put on eval kits, but we use it also ourselves in the ULINK series of debuggers that are CMSIS compliant.

EW (00:22:59):

And the debug access port, the DAP software. It makes it so that any Cortex can debug any other Cortex. And that's why-

RK (00:23:14):

That's too simplistic. CMSIS-DAP is just a firmware that basically translates USB to DAP commands, and the commands that go over USB are very simple. They are primitives: read memory, read registers, start executing. Even setting a breakpoint is a write-register operation. So the debugger that runs on the host computer, typically a PC, Mac, or Linux machine, translates the debug front-end commands, where you have symbols, into these primitives and sends them via USB to the CMSIS-DAP firmware. The CMSIS-DAP firmware does indeed run on a Cortex-M, and in this way, yes, we use a Cortex-M to debug a Cortex-M, but it is a lot more complicated than it appears.
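The "everything is a primitive" point can be sketched from the host side. Note the opcodes and packet framing below are invented for illustration; the real CMSIS-DAP protocol defines its own command set. The idea is that a high-level action like "set breakpoint" reduces to an ordinary register write:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Invented primitives for the sketch (NOT the real CMSIS-DAP
 * opcodes): the host debugger reduces everything to a few simple
 * commands that travel over USB to the probe firmware. */
enum { CMD_READ_MEM = 1, CMD_WRITE_REG = 2, CMD_RUN = 3 };

/* Pack one primitive into an outgoing buffer:
 * [cmd:1][addr:4][value:4]  (host byte order, for illustration) */
static size_t encode_cmd(uint8_t *buf, uint8_t cmd,
                         uint32_t addr, uint32_t value)
{
    buf[0] = cmd;
    memcpy(buf + 1, &addr, 4);
    memcpy(buf + 5, &value, 4);
    return 9;
}

/* "Set breakpoint" is not a special operation: it is just a write to
 * a breakpoint-unit comparator register (address shown here is only
 * an example of a Cortex-M FPB comparator). */
static size_t encode_set_breakpoint(uint8_t *buf, uint32_t code_addr)
{
    const uint32_t FP_COMP0 = 0xE0002008u;
    return encode_cmd(buf, CMD_WRITE_REG, FP_COMP0, code_addr | 1u);
}
```

The symbolic work, mapping a function name or source line to `code_addr`, happens entirely on the host; the probe firmware only ever sees these small packets.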

EW (00:24:09):

Good. Those are two sections that most developers never look at because they are about tooling. And some of the ones that I've used include the DSP pack, which has Fourier transforms and fake floating point numbers. Q numbers, I guess, is what they're really called.

CW (00:24:31):

Fake?

EW (00:24:34):

You know what I mean. And I heard that Edge Impulse, the tiny machine learning folks, also use your NN, your neural network pack. How do you decide what pack you're going to do next?

RK (00:24:50):

First of all, it's an evolution. So we consistently improve our components. Today we release, I would say, every nine months a CMSIS pack where we improve what CMSIS does. When it comes to the DSP, to pick up on what you just said, we have new processor technology coming along. The latest processor is the Cortex-M55, and it actually has a vector instruction set. We call it Helium. It's optimized for microcontrollers. And its vector instruction set enables parallel execution; it operates on vectors. A lot of the DSP functions benefit from it, but also neural networks. So for neural networks, our CMSIS-NN library, the primitives for machine learning, maps to this instruction set. And of course, we make it transparent, because the same operations you can also perform on a Cortex-M0.

RK (00:25:54):

Without the vector instruction set, you have then only the standard Thumb instructions, yet the M4 has SIMD instructions, which already improve DSP performance quite a bit. But with the Cortex-M55, we really focus on DSP and machine learning performance. And this can then also be extended further with the Ethos-U processors, where we have really high-performance machine learning. That is our neural network processor.
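The "Q numbers" mentioned earlier are fixed-point values, and CMSIS-DSP works heavily in Q15: a signed 16-bit integer interpreted as value/32768, covering [-1.0, +1.0). The sketch below shows the core multiply in portable C; it follows the Q15 convention but is only illustrative, not the library's code:

```c
#include <assert.h>
#include <stdint.h>

/* Q15 fixed point: int16_t interpreted as value / 2^15. */
typedef int16_t q15_t;

#define Q15_ONE_HALF  ((q15_t)0x4000)   /* 0.5 in Q15 */

/* Multiply two Q15 values: widen to 32 bits, scale back by 2^15,
 * then saturate. Saturation matters because -1.0 * -1.0 = +1.0,
 * which is not representable in Q15. */
static q15_t q15_mul(q15_t a, q15_t b)
{
    int32_t p = ((int32_t)a * (int32_t)b) >> 15;
    if (p >  32767) p =  32767;
    if (p < -32768) p = -32768;
    return (q15_t)p;
}
```

This widen-multiply-saturate pattern is exactly what the M4's SIMD instructions and the M55's Helium vector instructions accelerate, doing several such 16-bit operations per cycle instead of one.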

EW (00:26:30):

What was the processor name?

RK (00:26:32):

It was Ethos-U: the Ethos-U65 and U55. They can be combined into a microcontroller. And actually the [inaudible] device is the first deployment of such systems. Or when you take a look at [inaudible], they have a Cortex-M55 or Cortex-A32, if I remember correctly, and an Ethos-U65, basically designed for high-end machine learning on edge devices, on end nodes. Sophisticated technology is in use these days.

EW (00:27:04):

Yeah. It's certainly very popular. So you also have the drivers, which I don't use as much, although I always want to. You worked so hard at making the DSP and the packs so optimized, but it's the drivers... since you're supporting 9,000 different processors, the drivers are not as easy to get optimized. How do you balance between flexibility and optimization?

RK (00:27:39):

Yeah. In fact, we partner with our ecosystem, and when you take a look at the support of these 9,000 devices, actually the support is developed by the silicon vendor. We teach them how to do this support. And with drivers, we didn't make as many inroads as we hoped. Therefore we are bringing CMSIS [inaudible] into open governance. Actually, we did this last summer, and today we are working with ST and NXP to bring CMSIS-Pack to the next level. Something similar we may do with drivers going forward, so that we actually get drivers that are consistent across the industry.

RK (00:28:21):

With CMSIS-Pack, this is our first project that we do in this fashion, in this cooperative way. We envision that actually the tooling from ST, NXP, and Arm will have the same base functionality. And then we will base our tooling on the VS Code concept. So in Keil Studio, we use Theia; VS Code is derived from Theia [NOTE: Reinhard meant the other way around, Theia is derived from VS Code]. Theia is under the Eclipse Foundation. We envision actually that VS Code is one of the platforms of the future.

CW (00:28:57):

That's good to hear. I've been using VS Code recently and finding it quite nice in terms of extensibility and stuff.

RK (00:29:05):

Yeah.

EW (00:29:06):

And in terms of modern development environments, I've also been using STM32 Code IDE a lot.

CW (00:29:16):

Cube IDE.

EW (00:29:16):

Cube. Cube.

RK (00:29:18):

Cube tooling. Yeah.

EW (00:29:19):

And I have to say VS Code is better. And that's an understatement.

RK (00:29:26):

Yeah. That's what we frequently hear. Eclipse-based is not the modern state of the art. And that was also the reason why we started Keil Studio. We had DS, Arm DS, as an Eclipse-based IDE, but it wasn't so popular in the microcontroller industry, and Cube is, to my knowledge, Eclipse-based.

EW (00:29:51):

It is. But µVision is paid for?

RK (00:29:59):

Yes. There are also free-to-use variants. It's not all paid for.

EW (00:30:06):

So we did get a question from a user, from a listener. (Sorry, user?) About when and why to pay for compilers versus donating to open source ones. Do you have advice for when folks should pay for compilers versus not?

RK (00:30:28):

Yeah. As I said before, you actually don't pay for a compiler. The compiler is part of a complete offering where you get a debugger, where you get device support, where you get pretty much everything out of the box. Actually, my partner in the US called me up over 20 years ago. It was, for him, late in the night. And he called me and said, "Hey, Reinhard, we are selling a feeling." And I said, "John, no, no, we don't sell a feeling. How many margaritas did you have tonight? We are selling compilers." But actually, he was right.

RK (00:31:07):

We are selling a feeling. We sell the feeling, or the service, to get your job done with our tools. This means when you meet a problem, you can call up our support and we will help you, and our partners are also here to help. If there is a problem somewhere, then we actually help.

RK (00:31:29):

Of course, today partners also use open source compilers in their offering. You mentioned STCube, and there are many other free-to-use tools. But at the end of the day, it's a slightly different business model. It is part of the device offering. So the price tag is basically baked into the complete offering of ST, NXP and so on. So it is not so that the compiler is free. And when it comes to donating, you can of course donate to open source communities. Please do so, to actually help these communities. As I said, we are leveraging open source a lot and we contribute. I think we are one of the most active contributors in many open source projects; to name one, we contribute to Theia. We actually help the open source industry wherever it makes sense to us.

EW (00:32:33):

Including open source CMSIS.

RK (00:32:37):

Yeah, of course. CMSIS is also open source, but the reason why it is open source is not that it is free software. The reason why it's open source is that we remove all barriers to adoption: when you want to combine it with another open source project, you can do so. When you want to use it commercially, you can do so. We don't put any roadblocks in using this foundation software.

EW (00:33:04):

And so we come to free as in beer and free as in liberty, where the general open source definitions of freedom in English don't always make sense.

CW (00:33:19):

And there is free as in puppies.

EW (00:33:20):

Yes. Many open source projects are free as in puppies. But it is free as in liberty: you can use it and change it and do whatever you want with it. But as you mentioned, it's not really free as in beer; you are paying for it via purchasing Arm processors.

RK (00:33:42):

For example, yeah. Exactly. We have many engineers that contribute to open source projects, I think in total maybe more than a thousand. And of course, at the end of each month, they get a paycheck.

CW (00:33:56):

I think two of the things you mentioned are probably the key things for me when I'm considering paying for a tool set. Like you said, device support is huge, because that can often be really tricky, especially if you're not using one of the completely mainline, high-volume parts. So it's nice to get up and running quickly if you've got a weird chip. And secondly, to put it less diplomatically, it's nice to have somebody to yell at.

EW (00:34:23):

I think that's what he said. Yes.

RK (00:34:25):

Yeah. And you should also keep in mind, many of our customers are in the industrial space, to pick an example. And industrial is long-living. We have users that use a 10-year-old compiler version because they started the project with it and they only make tiny modifications. So the risk of introducing a newer version is too high, and we still maintain these older versions and help them use them. This is basically part of the service that we provide.

EW (00:35:03):

That software configuration management static-ness, which is found in embedded systems more than, I think, any other field, is important. Being able to rebuild code that is time critical or FDA approved-

CW (00:35:21):

Or very old

EW (00:35:22):

Or just very old because you don't make a lot of changes. Many modern compilers or modern systems, or heck, let's just call Python a thing, don't have that. It's harder to-

CW (00:35:39):

Lock everything down.

EW (00:35:41):

Lock the versions down.

RK (00:35:44):

Not when you are using our tools. I mean, in MDK you can say which tools you are using, and it gives you a list of the tools and the software components that you are using in your application. And you can basically file away these tools and use them five years later. So I don't see the problem. It's basically product lifecycle management; this is the term that is used in the industry, and it's part of the product lifecycle management flow.

EW (00:36:18):

But that is one reason to pay for compilers is so that you get that because it is harder to maintain all of that yourself.

RK (00:36:25):

Yeah. Correct. This is, as I said, we offer not just compilers. We offer the service.

EW (00:36:32):

And as Chris pointed out, there's the service of getting started more quickly without having to go out and search for things, especially when evaluating new platforms. I used mbed for a while for that, but things have changed there. What's going on with mbed?

RK (00:36:52):

Yeah. mbed was actually... or mbed is a term that stands for many things. mbed stands for the cloud compiler: more than 10 years ago we released the first cloud-based compiler. It stands also for the operating system. And the operating system is evolving; we are actually reworking it quite a bit here to make it more fit for purpose with the cloud services that are relevant today, such as Amazon, Alibaba, Google, to name a few cloud service providers. We need to connect to those cloud service providers these days. And when it comes to tooling, the new name for what was previously called mbed Studio is Keil Studio. And as the name implies, we want it to be, out of the box, not only mbed-OS centric; we want to make it generic for the whole range of diverse operating systems, for all kinds of debug tasks, and modernize it so that it can actually support CI flows and machine learning and the like.

RK (00:38:09):

So we have actually put the first implementation of Keil Studio into the cloud. It's a cloud service, and it's today free to use. And the cloud service will always have a free tier to use. So there is always a way to get started with Arm-based microcontrollers without any price tag. What it does is give you access today to 400 different evaluation boards. And in the meantime, we have a debugger in the browser. So you can connect an eval board, from ST for example, to the browser and you can debug, and you will not realize that it's actually a browser-based debugger. It performs very well. And you have no installation whatsoever to get started. You just start with an example project and can compile, modify the code, evaluate the performance of the system, and so on.

RK (00:39:18):

It's amazing. It's really cool technology. And nowadays we all talk about home office. Actually, I'm still working in my home office in Germany. These days we are obliged to work at home when possible, and we envision that in future there will be a lot of hybrid workers, people that work a few days in the office and other days at home. With a cloud-based solution, you can actually work everywhere you want. You just need a browser to get started and you have your development environment right with you. And this is the beauty of that aspect, but there are many other aspects of cloud native.

EW (00:39:59):

Okay. Tell me a few of those. I can only think of disadvantages. So the ability to get to it from multiple places and different kinds of computers, I can see that as an advantage.

CW (00:40:14):

Well, and there's the no installation, that's often a plus.

EW (00:40:17):

Well, yes, the no installation and the free is always beautiful. What else do you have for the pluses column?

RK (00:40:27):

Okay, let me give you a few. You are aware of git, I think. Git, with cloud repository hosting, is integrated in Keil Studio; you can actually host your repositories in the cloud. Then we have virtual machines in the cloud. These days we call this Arm Virtual Hardware. This is actually a server that you can connect to get CI up and running, and together with GitHub runners, for example with GitHub Actions, you can run a CI validation test whenever you commit something to your repository, and test whether there is a side effect. Of course, it requires quite a bit of investment, and we try to lower this upfront investment to set up a CI system. But once you have it, it improves your productivity dramatically. Actually, it's used quite heavily where safety-critical comes into play, the automotive industry for example, but we envision that it helps also the standard embedded developers going forward.

RK (00:41:38):

The next aspect is with an IoT device. When you program an IoT device, then you are anyway connected to a cloud service provider to see what your device is doing. And when you want to deploy a firmware update to your device, you can actually do this via over-the-air programming. And this gives you other benefits too. You can actually bring a product to market earlier, before the complete functionality is implemented, and over time you can extend the functionality of your deployed devices in the field.

EW (00:42:22):

So are you doing distributed management?

RK (00:42:26):

Not we ourselves, but for example, AWS is such a service. And we work with AWS to use the OTA services they offer and to integrate them better.

EW (00:42:37):

Do you do any of the management or log collection?

RK (00:42:41):

No. As I said, Arm is about partnerships. We leverage the ecosystem. Actually, to a certain extent, you can say the Arm ecosystem is our product. And therefore we work with partners that actively offer these types of services, and we try to integrate them into our tooling, for example, so that they get easier to use and give you the benefits that I mentioned.

EW (00:43:10):

One of the things I liked about mbed was how many peripherals were supported and how much code there was that was easy to use, but it felt like there was a lot of weight to it. At some point it wasn't easy to use because there was too much. That seems like it can be a problem with any cloud support, any cloud thing that also supports many, many things, many, many boards, many, many processors, many, many-

CW (00:43:44):

Libraries.

EW (00:43:45):

All the things. How do you avoid the weight?

RK (00:43:52):

First, you are right, Mbed had a few [inaudible], let's call it this way. And what we are doing differently in the Keil tool offering is we leverage the work that silicon providers do to optimize their SDKs. When you use an NXP or an ST or Infineon device, all of the silicon vendors today provide an SDK. And this SDK has highly optimized drivers in it. They are actually configurable in many different aspects. You can configure the DMA interrupt behavior, you can configure buffer sizes, and so on. And these were written in a way where the code overhead is quite tiny. Of course, you can go and directly call the register interface if you wish to, but frequently you will not beat the implementation from a silicon partner.

EW (00:44:57):

Have you seen the STM HAL?

RK (00:45:01):

Yes.

EW (00:45:01):

Because I've spent a lot of time with it and I'm not sure optimal is the word.

RK (00:45:07):

Yes, it is. They have two different flavors of the HAL. There is the so-called LL, low-level driver HAL, which is really quite optimal. And now of course you need to compare it fairly. It is configurable and it has a lot of features. And when you compare it to Mbed, you will see it is more optimal, and I mean more optimal also in terms of flexibility; it has DMA control in it and so on. And therefore the overhead is actually not dramatic for the features that it provides.

CW (00:45:51):

I would put it this way, I would certainly not, not use it. Wait, I would definitely use it. It can be tricky to use. And there are definitely corners that you can get lost in. But I did find that the low level was more what I'm used to.

EW (00:46:06):

There was still a lot of checking, which is great if you're doing things the first time. But if you're running millions of DMAs in an hour, I don't need you to check things again and again.

RK (00:46:20):

Yeah. But this is an assert macro; when you compile without the debug switch, then it is compiled away.

EW (00:46:27):

So it wasn't always an assert. Yeah, yeah. That doesn't matter. That's not really a point for you anyway. I should take it up with the STM32 folks.

RK (00:46:37):

But sure, ST has room for improvement. And I am not here to defend ST.

EW (00:46:37):

No. No.

CW (00:46:44):

Like I said, I don't recommend not using it.

EW (00:46:45):

Right. Right.

CW (00:46:47):

Definitely use it if you're on STM.

EW (00:46:50):

Optimize the code, if you have to, but only if you have to.

CW (00:46:52):

Exactly. Back on the cloud stuff, because that interests me. I'm of two minds about it. I have my old man mind, which is: what are you doing putting things on somebody else's computer? And then I have the future mind, which is: okay, this sounds interesting. And they're fighting. So one of the questions that comes to mind is, is this something you can switch back and forth? Can you have some people on the cloud service and some people on their desktops doing the same project?

RK (00:47:23):

Yeah, you can. Actually, what we have in the first incarnation is you can export projects into MDK and use the desktop tooling. MDK has about 250 man-years of investment in this IDE. And it will take us a few more months, let's call it this way, to get Keil Studio on par with the MDK feature set. Once we have that, we can deploy Keil Studio also on the desktop, in Linux, Mac, and Windows versions.

CW (00:47:56):

Right.

RK (00:47:57):

And this means that you can pick and choose whatever you want. You can decide. View it like Microsoft Office 365. In Office 365, I can decide to use an online version of Word or an offline version of Word, and actually switching back and forth is kind of seamless. And the cloud has this benefit I mentioned: it supports hybrid working quite well. It helps you avoid having to re-install computers over and over, and therefore I think the flexibility that you get with it will be the long-term benefit. You still will have desktop computers in years to come, with some setup that is for your project. But to get started with a new project, to evaluate a new system, you wouldn't spend days setting up a development environment on your local computer. Instead, you would use a software-as-a-service system like Keil Studio and start right away. This is how we see the future when it comes to embedded programming.

RK (00:49:03):

And the other aspect: I hear a lot, "Yeah, we are concerned about data security." But if you think about it, your computer is connected to a network. And the security that the big cloud service providers offer you is far better than on your local computer. They have a whole team that actually checks whether there is an attempt to attack the systems, because if that happened, they would be in trouble.

EW (00:49:35):

Yes.

CW (00:49:37):

Let me ask one more question that goes back to something Elecia talked about with you a few minutes ago, which is the locking down various versions of things. Is that something that's easy to do, will be easy to do with the cloud service? "Okay, I need this compiler from two years ago and these libraries and that's documentable and traceable" or how's that going to work?

RK (00:50:01):

In our virtual hardware system, we work with so-called Amazon Machine Images, and the Amazon Machine Images are versioned. And you can run an old version of a machine image or the latest version of a machine image. And the old version gives you the environment that you had two years ago; you are not forced to use the latest and greatest. In Keil Studio, we are not at this level yet. In classic MDK, you can actually say, I want to use the compiler from two years ago in my project, and it takes the compiler from two years ago. We envision something similar that we will offer for professional users of Keil Studio.

EW (00:50:48):

I'm really glad to hear you emphasize evaluating new platforms, because one of my uncertainties about using the cloud compiler was the download time. I mean, I have to flash it. That always takes an irritatingly long time, even if it's only two seconds.

CW (00:51:10):

Two seconds? You're lucky when you flash and it takes two seconds.

EW (00:51:13):

Small processors. But downloading just adds more time. Knowing that I could then put it on my computer would be helpful in that regard, once I was finished with evaluating and wanted to get down to solid development. I really think that's important.

RK (00:51:36):

Yeah. And I think we have to offer these types of installations for a couple more years, until nobody thinks about the cloud anymore. Because to be fair, I use Visual Studio Code on the desktop and I use Keil Studio in the cloud, and I sometimes forget that I am using a browser version of the editor. The performance is almost identical, and therefore it will become a habit to use the cloud, I'm pretty sure. In Arm, we leverage cloud services quite a lot. And I know that German automotive also uses cloud services quite a bit.

EW (00:52:22):

I do too. And yet maybe it's having used mbed that wasn't a pleasant part of the process.

CW (00:52:29):

Yeah. It was early days though. First time.

EW (00:52:30):

It's true.

RK (00:52:31):

Yeah.

EW (00:52:32):

But we do have really good network. I imagine Reinhard also has really good network connectivity, which is not true of everyone in the world.

CW (00:52:42):

That's true. That's true.

RK (00:52:45):

Today, I wouldn't worry about that anymore. I think that in many countries you have decent network connectivity, and therefore I wouldn't worry too much about it. And actually, these days pretty much everyone does video conferencing, and for a cloud-based IDE you don't need more bandwidth than for video conferencing. Actually, I think you need less.

EW (00:53:16):

Yeah. I mean, I still have international friends who definitely, we don't video conference, we voice conference because that's the level of-

RK (00:53:29):

Yeah. But keep in mind the compilation is actually done on the cloud server. And the cloud server that Amazon provides is four or five times faster than the notebook that I'm using. It's an Arm notebook, maybe not the fastest in the world, but even if I got a very fast notebook, the cloud server would beat it on compilation time. When it comes to editing, yeah, bandwidth plays a role, but to be fair, what you download and upload is source files of a few hundred kilobytes. It's not much that you need.

CW (00:54:09):

It does level the playing field in some interesting ways.

EW (00:54:12):

Yeah. It definitely does make our computers consoles again.

CW (00:54:17):

Why did I spend so much money on it?

EW (00:54:22):

You mentioned virtual hardware much earlier, I think at the very beginning. And now that you've explained the cloud, I suspect the virtual hardware... Is it simulated?

RK (00:54:36):

It's basically a simulated embedded device in the cloud.

EW (00:54:41):

Are you familiar with Wokwi?

RK (00:54:43):

No.

EW (00:54:48):

He simulates the processors, like the Raspberry Pi Pico, the RP2040? 4080?

CW (00:54:56):

2040.

EW (00:54:57):

2040. And also ESP32 and ATmega. He actually simulates the processors and has peripherals and runs code. And you can actually GDB your Arduino code in the web.

CW (00:55:16):

That's also in the cloud?

EW (00:55:16):

Because it's fully simulated. Yes. That's also in the cloud.

RK (00:55:21):

Yeah.

EW (00:55:22):

Are you going to do something like that, or something more like Mbed, where there was a simulator, but it didn't really simulate the hardware so much as it kind of approximated the hardware?

RK (00:55:35):

Yeah. What we do is we simulate basically a processor system with some peripherals, and this is basically our offering. You can pick and choose which processor you want to simulate, so you can actually simulate a Cortex-M4 or an M7 or an M55 with Ethos-U, and you can test-drive your algorithms on this simulator and make performance comparisons and the like. So to a certain extent, it is also there to help evaluate the different processors from Arm.

RK (00:56:13):

It helps you also in the software design cycle, when it comes to unit and integration tests, because this type of testing you can do on this type of simulator, and we offer them as a cloud service. When it comes to complex CI, you can actually start multiple instances. And when you run unit tests at scale, you typically have many hundreds of tests that you perform, and the flash download time alone on real hardware is much longer. But there is also something that cannot happen: real hardware, when you flash it 10,000 times, that is about its life cycle; it's dead. A virtual machine cannot die.

CW (00:57:02):

Wow. Not in the same way.

RK (00:57:06):

That is the beauty of it. And we position it today with CI, but going forward also with MLOps. MLOps is the development flow that you use for machine learning, where you need to optimize the algorithm that you deploy to your target system. And machine learning optimization will happen anyway in the cloud, because of the compute-intensive machines, or the compute-intensive algorithms, that you need. So the training for many endpoint devices will happen naturally in the cloud, and we think the validation steps are also better in the cloud. And once you have validated it in an MLOps workflow, you can actually deploy it to your target system.

EW (00:57:58):

Yeah. I can see that.

RK (00:57:58):

Yeah. To a certain extent, we look a little further ahead than the normal embedded developer needs today to get the job done. But we think about what the need in this industry will be in two or three years.

CW (00:58:12):

Yeah.

EW (00:58:12):

I mean, that's how you stay in business because it's going to take you a couple years-

CW (00:58:18):

To do anything.

EW (00:58:19):

... To do anything. And then we're finally catching up and saying, oh, we need that.

CW (00:58:23):

Although, I still feel like embedded is in many ways lost in the past. So it's good to see some forward thinking stuff happening.

RK (00:58:33):

Yeah.

EW (00:58:35):

Let's see. I think we still have a couple of questions. Tom asked about Yacc, but I think we've covered that. Andre from the Great White North asked about who influenced you: Dennis Ritchie, Aho, Niklaus Wirth, Sethi and Ullman? Someone else? I only recognize one of those names.

RK (00:58:56):

Yeah. Niklaus Wirth is, to my knowledge, the author of Pascal. And actually when I started university, this was 1980, Pascal was the high-level language of the day in academics, in the universities. The first language that we learned was Pascal, and a popular compiler at the time was Turbo Pascal. But to be fair, I had written tools in PL/M until we started with the C51 compiler in 1986. And PL/M is actually very close to Pascal. PL/M was the Intel flavor of a high-level language, I would call it these days. It's actually an intelligent assembler, because at the end of the day the compiler wasn't that clever, but the productivity gain compared to assembler was dramatic. Therefore, I like Pascal a lot. Yeah. Then I think you mentioned Dennis Ritchie, the inventor of the C language. And of course I know him, not personally, of course, but I read his book inside out.

RK (01:00:08):

It was the Bible. And you have to keep in mind, when we started with the C51 compiler, there was no ANSI standard. That didn't exist at the time, or at least we had no access to it. I think it was being designed in '86; it was not released officially. And therefore Kernighan and Ritchie was the go-to book when it came to how the language should behave. And the other colleagues, Aho and so on, wrote the dragon book. We call it the dragon book because there was a dragon on the cover of the book. It had a very good collection of clever algorithms for compiler design, but with the caveat that they did not consider resources as a constraint. They basically assumed infinite resources were available for the algorithms they described. And the challenge was to map the algorithms to the computer power that was available at the time.

EW (01:01:14):

It's been a pleasure having you Reinhard. Do you have any thoughts you'd like to leave us with?

RK (01:01:19):

Take a look at Keil Studio, take a look at what we are up to. On the landing page, you actually have quite a bit of outlook on what we will do in the future. And I encourage you to take a look at these tools and explore them.

EW (01:01:37):

Our guest has been Reinhard Keil, senior director of embedded technology at Arm and founder of Keil Software.

CW (01:01:47):

Thanks, Reinhard. This was a fun discussion.

RK (01:01:50):

Thank you for your time. Really a pleasure to meet you.

EW (01:01:54):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for questions. And thank you for listening. You can always contact us at show@embedded.fm or at the contact link on embedded.fm. When I say always, I mean sometimes, because it's been down for a little while. If you didn't get a response and you thought you should, please do resend it. It's been down since November.

EW (01:02:20):

Now a quote to leave you with, from Grace Hopper: "I had a running compiler and nobody would touch it. They told me computers could only do arithmetic."