500: Nerding Out About Ducks
Transcript from 500: Nerding Out About Ducks with Komathi Sundaram, Christopher White, and Elecia White.
EW (00:00:06):
Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Komathi Sundaram. We are going to talk about testing, but we are going to talk about testing as if it was really exciting, because Komathi thinks it is.
CW (00:00:22):
Hi Komathi. Welcome to the show.
KS (00:00:24):
Thank you. Hi. Hi Chris. Hi Elecia.
EW (00:00:27):
Could you tell us about yourself as if we met, I do not know, at the Embedded Online Conference, if it was real- If it was in person and had a table we met for lunch.
CW (00:00:39):
Yeah, I am pretty sure it is real. <laugh>
EW (00:00:40):
It is not my imagination.
KS (00:00:44):
It is real. I think it is just a little bit virtual when you are recording, for me. I am Komathi Sundaram. I am a principal test engineer, predominantly working in embedded software testing side, and definitely lots of automation.
(00:01:01):
I did start my career as a software developer. But then I found testing more fun, because I am a curious person. So when I found something big, a big bug, I almost felt like, "Okay, it is worth investing my passion into testing."
(00:01:17):
So yeah, that is how I ended up at Embedded Online Conference. Because they invited me, and accepted my application to talk about embedded software testing using hardware-in-the-loop benches. Yeah, that means I made a good choice.
EW (00:01:34):
We want to do lightning round, where we ask you short questions and we want short answers. If we are behaving ourselves, we will not ask, "Why?" and, "How?" and, "Are you sure?" Are you ready?
KS (00:01:42):
Yes. Should I be nervous?
EW (00:01:44):
No.
CW (00:01:45):
Probably not.
KS (00:01:45):
<laugh>
CW (00:01:45):
Favorite animal?
KS (00:01:49):
Oh, that is an easy one. That is there in my logo. Hammerhead Sharks.
EW (00:01:55):
Favorite place to be a farmer?
KS (00:01:57):
Oh. Argentina, although I spend the most time being a farmer in Costa Rica. Because you get to drink wine in Argentina <laugh>.
EW (00:02:07):
But not in Costa Rica?
KS (00:02:11):
No. You are working. I was in the middle of nowhere, so there was no wine. <laugh>
CW (00:02:19):
<laugh> Complete one project or start a dozen?
KS (00:02:22):
Yeah, I prefer a balance. I do not think I have ever worked on one project, or a dozen, at the same time. I am not extreme. So I would say, "Two to three projects." I know that is not one of the options. Sorry.
EW (00:02:34):
What is the most objectionable material you have ever used to construct a house?
KS (00:02:38):
Oh. Horse manure. <laugh>
CW (00:02:45):
Look, you use what is on hand, I guess <laugh>.
KS (00:02:46):
Yes. Yes, absolutely. You just have to mix it with sand, dirt and some hay so that it all sticks together. But basically the horse manure is the sticky thing, that holds things together <laugh>.
EW (00:03:00):
That is not going to be the show title. No, I can see your cogs turning.
CW (00:03:04):
I am not doing that. No.
KS (00:03:05):
<laugh>
CW (00:03:05):
Favorite fictional robot?
KS (00:03:10):
Oh. WALL-E. It is such a cute robot.
EW (00:03:16):
Do you have a tip everyone should know?
KS (00:03:18):
Yes. With respect to, of course, testing. I would say let the systems break early. Find ways to break the system early, so you can fix things faster, cheaper, and smarter.
EW (00:03:33):
This is the first time that "break things fast" actually really works for me.
CW (00:03:39):
Well, the idea is to find out what is going to break, not break it on purpose.
EW (00:03:43):
Oh.
CW (00:03:43):
No, I do not know. Right? No, I agree with you. I am agreeing with you.
KS (00:03:48):
Well, when you do that as a job, then you will have to break it on purpose.
CW (00:03:51):
Yeah.
EW (00:03:57):
<music> I would like to thank our show sponsor this week, Nordic Semiconductor. Nordic has a vast ecosystem of development tools. It helps us reduce our development cycles and accelerate the time to market. For example, the nRF Connect for VSCode extension provides the most advanced and user-friendly IoT embedded development experience in the industry.
(00:04:19):
Nordic's Developer Academy online learning platform, equips developers with the know-how to build IoT products with the Nordic solutions. Nordic's DevZone brings together tech support and a community of customers, to provide troubleshooting and assistance on technical topics.
(00:04:35):
Visit these key websites to learn more, nordicsemi.com, academy.nordicsemi.com and devzone.nordicsemi.com. And to thank you for listening, Nordic Semiconductor has a contest to give away some parts. Fill out the entrance form and you are in the running. That link is in the newsletter and in the show notes. <music>
(00:04:57):
You mentioned you were a software development engineer-
KS (00:05:05):
Yeah.
EW (00:05:06):
And you moved into testing. How did that come about?
KS (00:05:14):
It was back in India, which is where I grew up. I was working for Motorola, testing telecommunication devices, like huge boxes. Then all of a sudden, all the test engineers were laid off.
(00:05:27):
I stepped into doing system and integration testing of a feature for another developer. We started doing peer-to-peer testing. That is how I figured out that, "Wow, this is so fun! I get to see how this feature actually works end to end."
(00:05:47):
It just fascinated me when I found a bug, and the joy that the developer had, believe it or not. Maybe because it was developer to developer testing. So that was a moment. It was probably the middle of the night, back in Bangalore in India. Yeah, that is how I switched. I think I wanted to stay curious, a bit. And then, the rest is my career.
EW (00:06:19):
That joy of finding the bugs, does it persist?
KS (00:06:27):
Not when you are doing something repetitively.
EW (00:06:29):
<laugh>
KS (00:06:29):
I think that is why I started leaning on automation. If I am doing something three times, and I am not having fun- It is that constant pulse check, "Are you having fun today? What did you do today, interestingly? What did you break? What kind of a new bug did you find today?" So that is the joy.
EW (00:06:56):
And if you do not get that, you start looking towards automation.
KS (00:06:59):
Yes, absolutely. I think I did start with manual testing, because that is how I figured out what testing is, first of all. Before I started solving problems with automation.
EW (00:07:14):
But when you start developing automation, you are back to developing lots of lines of software.
KS (00:07:19):
Yep.
EW (00:07:23):
But you stay a test engineer. How does that work?
KS (00:07:26):
Yes, that is a very interesting question and I love that question. Sometimes it is true, that automation ends up having more code, than the firmware code itself. Then you would get into the problem of how do we maintain this?
(00:07:40):
That is exactly why I follow this concept called "unified testing." What it means is that it uses most of the object-oriented principles such as, "Can you write a module that can be reused?" So you would just apply all the abstraction layers, design your abstraction layers, design your objects very well with relevant scalable concepts, and not duplicate the code all the time.
(00:08:12):
So if a test script has like a thousand lines, then definitely throw it away and start again, is my motto.
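To make that layering concrete, here is a rough Python sketch of what "unified testing" style reuse might look like. The class and method names (BenchTest, DutSerialConnection, and so on) are illustrative assumptions, not from Komathi's actual framework.

```python
# Illustrative sketch: shared behavior lives in base classes so individual
# test scripts stay short and nothing is duplicated.

class DutSerialConnection:
    """Abstraction over however the device under test is actually reached."""
    def __init__(self, port: str):
        self.port = port

    def send(self, command: str) -> str:
        # A real implementation would talk to the DUT; stubbed for the sketch.
        return f"OK: {command}"


class BenchTest:
    """Base layer: setup/teardown and helpers every bench test reuses."""
    def setup(self) -> None:
        self.dut = DutSerialConnection(port="/dev/ttyUSB0")  # assumed port

    def teardown(self) -> None:
        pass

    def assert_response(self, command: str, expected: str) -> None:
        response = self.dut.send(command)
        assert expected in response, f"{command!r} returned {response!r}"


class AudioBenchTest(BenchTest):
    """Product-specific layer: audio helpers shared by all audio tests."""
    def assert_tone_loops_back(self, freq_hz: int) -> None:
        self.assert_response(f"PLAY_TONE {freq_hz}", "OK")


class TestPairing(AudioBenchTest):
    """Leaf test: only the scenario itself, no duplicated plumbing."""
    def test_tone_after_pairing(self) -> None:
        self.setup()
        self.assert_response("PAIR", "OK")
        self.assert_tone_loops_back(1000)
        self.teardown()
```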
CW (00:08:20):
Who tests the test scripts?
KS (00:08:23):
Oh, we do test the test scripts too. <laugh>
CW (00:08:26):
Where do you stop? <laugh>
KS (00:08:27):
So, okay. I think you can keep writing tests for tests too. But I think I usually find a balance. If I am adding code- Like I said, when you are using inheritance and abstraction layer based concepts, then when you are adding code to a layer that affects many different types of tests that are inherited from that particular Python class, let us say, then you need to think about, "Okay, how do I protect all these thousands of tests that are sitting on top of this class?"
(00:09:07):
Then that is when you add a gate, even to test PRs. So you have got to pass this gate before you get into that class, which could affect literally all the project's tests. Usually it is thousands of tests. So yeah, you need to create your configuration of what layers get tested, what layers do not need testing. For the tests, I mean.
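A minimal sketch of that gating idea in Python: decide how much meta-testing a test-framework PR needs based on which layer it touches. The paths and gate names here are hypothetical.

```python
# Map test-framework layers to how heavily a change there must be checked.
GATES_BY_LAYER = {
    "framework/core/":    "run-framework-selftests-and-smoke-suite",  # affects every test
    "framework/drivers/": "run-driver-selftests",
    "tests/":             "lint-only",  # leaf test scripts need no meta-tests
}

def required_gate(changed_path: str) -> str:
    """Pick the gate for one changed file; default to the lightest check."""
    for prefix, gate in GATES_BY_LAYER.items():
        if changed_path.startswith(prefix):
            return gate
    return "lint-only"

if __name__ == "__main__":
    print(required_gate("framework/core/dut.py"))   # heavy gate
    print(required_gate("tests/test_pairing.py"))   # light gate
```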
CW (00:09:36):
Right. Right.
EW (00:09:38):
Okay, let us do a little bit of role playing. Let us say I am a curmudgeonly engineer who has reluctantly but diligently done manual testing for years. Say it is an inertial navigation product, so it is complex. And testing it requires a nuanced understanding of the system.
(00:09:59):
My boss went to an automation testing conference and wants me to add that to the list of things he can say about our product. How do I even begin? I know you talk about hardware-in-the-loop testing and unified testing, but where do you start with somebody who does not have- Where manual testing has been the right way for so long?
KS (00:10:24):
Well, I beg to disagree there. Can you explain me that word you used? I cannot even say the word. "Curm-" What is that?
EW (00:10:36):
Curmudgeonly. Cranky.
KS (00:10:37):
What is the meaning of that word?
EW (00:10:38):
Cranky. It means old and cranky and set in their ways.
CW (00:10:41):
Yeah.
KS (00:10:42):
Okay.
EW (00:10:42):
Christopher, you should explain "curmudgeonly."
CW (00:10:44):
That is mean.
EW (00:10:44):
<laugh>
KS (00:10:49):
<laugh> Okay. This is when I usually use the joke like, "Oh, English is not my native language."
EW (00:10:53):
No, no. And spelling it is even more challenging.
CW (00:10:57):
Yeah, it has got all the vowels.
KS (00:10:57):
<laugh> Yes. There is no way I am going to attempt to say that word. Maybe I will practice later.
EW (00:11:04):
You can use "cranky" instead.
KS (00:11:05):
<laugh>
CW (00:11:08):
Grumpy. Jaded.
EW (00:11:11):
Cynical.
CW (00:11:12):
Burnt out.
KS (00:11:14):
Okay. That is when I come in. So my approach is this. Anything that has been done the same way always, and it is just- It is almost like humans' ability to not accept changes. Or also like, "I like this familiar way of doing things, which is manual testing. Even though-"
EW (00:11:32):
It works.
KS (00:11:32):
"It takes about one hour to run one test, manually. And then I do that every day." It is a muscle memory or something.
(00:11:41):
So what I would say is my approach is first of all, I would understand the workflows and would pick the workflows that are high value, high impact. So basically that could be my sanity test. But I do create an incremental way of adding automated tests. Like you said, Elecia, it is a lot of code. So even for automation, you need a proper design. You need a proper set of modules, shared modules. So that takes time.
(00:12:15):
So what I would do is first get my hands dirty. Create a strategy of how I am going to approach this. Is it really worth automating? Is the value too high to automate? So I would assess a few things.
(00:12:29):
Then I usually come up with an automation architecture, that would have incremental points of progression. So what that means is that you are giving an opportunity for the developers to catch up with you.
(00:12:41):
Like, all of a sudden you just add ten tests in front of their PR, that is going to gate them. They are going to be surprised like, "Why? I used to be able to merge my PRs all the time. And you are supposed to do manual testing, and then tell me a month later if the software is good or not."
EW (00:12:57):
<laugh>
CW (00:12:57):
<laugh>
KS (00:12:57):
So what I would like to do is pick those high value tests, automate them, show them the magic of catching the bugs as soon as they merge, and then get on their side. I am talking like a test engineer who is testing Elecia's code right now, by the way. I am not talking like a developer.
EW (00:13:18):
Sure, sure, sure.
KS (00:13:19):
I can expand more, but that will be where I start.
EW (00:13:23):
One of the things- Continuing in my role as cranky engineer, or cranky team lead. I am having my team use Git instead of renaming folders with version numbers. That is the level of automation and technology we have.
(00:13:40):
And so one of the first things that we would need to do, would be to set up a test server. Which seems like a lot of work to set up and maintain. Is it? Or does it just seem that way?
KS (00:13:58):
Well, I think since I have done it so many times, I think it has become a muscle memory for me, that it does not seem like an effort. But I can see why it is an effort, if you do not have someone that is sitting there and testing things and setting up all these infrastructure.
(00:14:13):
I think I would, again, start with small incremental steps. Like, can you add some validation checks for the environment, so that the developer actually trusts the results and finds the very high value failure modes, rather than lots of noise? You need to focus on, "What is the signal to noise ratio? Is it even worth it? Or am I just spending all the time debugging failures and errors that are absolutely not adding value to me?"
(00:14:48):
So again, this is where I would focus on. "What are the high value failures that I need to catch early?" I would add validation checks.
(00:14:56):
And, I think you mentioned about, even for PR- I think I mentioned about PR testing. So here is where I would find, "How can I quickly break the system?" So I will come back to breaking systems early. What kind of things I can do, that I am allowed to do on this particular feature, as per the requirements, let us say, to find those issues?
(00:15:20):
Then I think again, having an architecture to build those test servers, that are easy to bring up. Rather than like, "Oh my God. It is so complicated- It is going to take five engineers one month." Then I guess, we are definitely going about it the wrong way.
(00:15:33):
I think having a solid foundation of an architecture, a well thought out design, that can scale and actually does not have any maintenance burden. Anyone would sign up for that, right?
EW (00:15:45):
Yes. Yes. So much yes <laugh>.
KS (00:15:48):
<laugh>
EW (00:15:48):
Okay. So, tactical. Where do we start? We are using Git. Does that mean Jenkins is the right answer? Or is that a, "It depends" sort of-
CW (00:16:03):
If you are using GitHub, GitHub has the Action stuff where you can run things on PRs. Right? I have never done it. <laugh>
KS (00:16:15):
Yeah, you can. Yes, absolutely. So I will tell you. Since we are using GitHub, let us just plug in something, whatever it is you want to plug in. Let us say you want to take a GitHub PR, and you just want to comment saying, "Test this. I am telling you, test this." So you can just attach a comment to a PR, to trigger something internally that runs tests on Jenkins or whatever you pick.
(00:16:41):
I do think that CI infrastructure tools like Jenkins, Buildkite and- Even I heard about Beetlebox or something like that. There is one more fun CI environment for HIL testing. Just connect anything.
(00:16:58):
But I think one thing that I recommend is, do not write a thousand line of CI code, and then you do not know when things are breaking. This is where the logic comes in, where when you have a PR, configure it so that, "What files do I absolutely need to test for, to catch the relevant failures?" and, "What kind of PRs must definitely go through a lot more heavier testing?" So you need to find what are those meaningful failures and group them.
(00:17:31):
Then you also need to think about how long to test for. Let us say if you are testing, if you are pushing a PR in GitHub, and you have a simple mechanism to trigger testing on one of the hardware-in-the-loop benches, then you need to know that the bench is available.
(00:17:50):
You cannot just trigger a test like, "Oh, I do not know when the bench is available or not. I am just going to go through the usual CI commands, and kickstart a test." Then one hour later, it says, "By the way, bench is busy."
(00:18:05):
This is what I would come back to, right, which is sit together with your developers. Think about what type of failures you want to catch for those PRs. Think about the time. Like, how long do you think it is- I would ask you, Elecia, "If you submit a PR, how quickly do you want the feedback for your PRs? Timewise."
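A rough Python sketch of the two checks described here, before kicking off PR testing on a hardware-in-the-loop bench: select tests from the files the PR touches, and fail fast if no bench is free instead of discovering it an hour later. The paths and the bench-status shape are assumptions for illustration.

```python
# Map source areas to the HIL tests that actually cover them.
TESTS_BY_AREA = {
    "src/bluetooth/": ["tests/hil/test_pairing.py", "tests/hil/test_audio.py"],
    "src/power/":     ["tests/hil/test_power_modes.py"],
}

def tests_for_changed_files(changed_files: list[str]) -> list[str]:
    """Pick only the HIL tests relevant to what this PR touched."""
    selected: list[str] = []
    for path in changed_files:
        for prefix, tests in TESTS_BY_AREA.items():
            if path.startswith(prefix):
                selected.extend(t for t in tests if t not in selected)
    return selected

def bench_is_free(bench_status: dict) -> bool:
    # A real setup would query the bench scheduler; stubbed here.
    return bench_status.get("state") == "idle"

if __name__ == "__main__":
    changed = ["src/bluetooth/gatt.c", "docs/readme.md"]
    tests = tests_for_changed_files(changed)
    if not tests:
        print("No HIL tests needed for this PR.")
    elif not bench_is_free({"state": "busy"}):
        print("Bench busy: report back to the PR now, do not queue blindly.")
    else:
        print("Would trigger:", tests)
```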
EW (00:18:25):
I think that is really an important question, because if it takes more than an hour, I get pretty frustrated. Because I forget what I have done. I am just so noodle-headed sometimes. But I remember when Chris worked at Cisco-
CW (00:18:36):
<laugh>
EW (00:18:38):
You had-
CW (00:18:39):
24 hours.
EW (00:18:40):
24 hours. By then the tree had all changed, and you had to do another 24 hours.
CW (00:18:45):
I never committed anything.
EW (00:18:47):
It was just so awful.
CW (00:18:48):
And then I quit. I think I had a commit pending for eight months and then I gave up. I am exaggerating. But the static- These were not hardware-in-the-loop. I do want to remind people, we are talking about a little bit of a narrow version of testing, which is hardware-in-the-loop, which adds additional complexity.
EW (00:19:04):
I have not started that yet.
CW (00:19:05):
But she has mentioned it.
EW (00:19:06):
Yes, that is true.
CW (00:19:07):
No, I have done- Let us talk about testing. Seriously. I think at the end of the day, hardware-in-the-loop is just one more layer we added.
(00:19:15):
Yeah. But it is a constraint, right?
KS (00:19:17):
We are talking about testing in general. I can even remove the hardware-in-the-loop right now.
CW (00:19:20):
Yeah.
EW (00:19:22):
So it is about figuring out what you want to test.
CW (00:19:26):
And how much pain you are willing to put up with <laugh>.
KS (00:19:28):
Oh. See, that is the thing. I think that is where the focus is for me. As a test engineer, I have done a great job, if I do not come across as a pain by adding the tests.
CW (00:19:41):
Right, right. Yeah.
KS (00:19:41):
I am actually like, "Oh, my God. I cannot wait for results to come back. I know it will find real bugs for me." I will not be looking at debugging failures, of how Jenkins went down in the middle of the testing, right?
CW (00:19:54):
<laugh>
EW (00:19:55):
Yes. I hate that! I had this big test suite. It was simulation. For some reason, it would run perfectly on my computer. I would push it to Jenkins. Jenkins would barf all over it. It took us a week to figure out that there was some driver that was incompatible on the Jenkins system. I am like, "That is not what I want to spend my time debugging."
CW (00:20:18):
Did you try to run a Windows app as part of Jenkins, or something?
EW (00:20:21):
It was running on a Windows server.
CW (00:20:22):
<laugh> You were on hard mode?
EW (00:20:27):
I stand. Yes, but I did not know I was on hard mode.
CW (00:20:31):
Yeah. Yeah.
EW (00:20:33):
It is not fair to say you are doing something difficult, when all you are doing is flailing about, hoping you get it right. Sorry. Lately my work has felt a lot like I am [an] animal at the drums, just randomly flailing about. Sometimes good stuff comes out, and sometimes I hit myself in the head with the drumstick.
KS (00:20:54):
So these are all great, because that is why I have a job.
EW (00:20:58):
<laugh>
KS (00:20:58):
<laugh> This is why I keep going back to create your foundation. How do you want this thing to work? What are the kind of KPIs you want to hit? The KPI could be how many times do you annoy your developer by failing a test? But then for that test failure, is it a real failure in the firmware, or a real failure in the CI, or a real failure in the test? Is the test broken? Is a board broken?
(00:21:25):
So if you are doing a guesswork, then that means that is really bad. This is why I would say the thing that you talked about, Elecia, is that with Jenkins- "It runs on my system fine. Why is it not running the same way in CI?" This is the first problem I always solve, when I go into PR testing using continuous integration.
(00:21:49):
I would first of all figure out what are all the dependencies that we need to have, for system under test? Forget about the hardware-in-the-loop for a second, right. What is this software you are testing? What are the kind of dependencies you have?
(00:22:00):
What are the kind of configurations it needs to be to say, "Let us go." Like, when you do a go button for, "Let us go test," this is a clean space. I would typically have a Docker environment. I would deploy everything in that. Which means you run the same way in Jenkins, and then you run the same way in your local computer.
(00:22:19):
That is like containerizing all the dependencies and configurations you need to test your software, and then you build on top of that. Again, automation has many layers. I do not want to get into that. But that is where the problem is.
(00:22:37):
A lot of us just do whatever we want on a computer, and then install everything and then test. Then the same test fails when it runs in a clean environment like CI. It is a common frustration developers have, believe it or not.
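A minimal sketch of the "run it the same way everywhere" idea: wrap the test invocation in a container so your laptop and Jenkins share one environment. The image name and paths are assumptions; the point is only that local and CI runs go through the same wrapper.

```python
import os
import subprocess

# Hypothetical image, built from a Dockerfile checked into the repo.
IMAGE = "my-team/firmware-test-env:latest"

def run_tests_in_container(test_path: str = "tests/") -> int:
    """Run the test suite inside the shared container image."""
    cmd = [
        "docker", "run", "--rm",
        "-v", f"{os.getcwd()}:/workspace",  # mount the current checkout
        "-w", "/workspace",                  # run from the repo root
        IMAGE,
        "pytest", test_path,
    ]
    return subprocess.call(cmd)

if __name__ == "__main__":
    raise SystemExit(run_tests_in_container())
```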
EW (00:22:53):
Yeah, no. Although as I was watching my continuous integration do its thing, and it was reinstalling things it had reinstalled many times before, it did occur to me that as we have more processing power-
(00:23:12):
Sorry, as total tangent. As we have more processing power, we make computers do the same thing over and over again more often. When the computers rise up, when the robots attack, they are going to be, "This is for all the years of boredom."
CW (00:23:28):
We do not have to worry about that, because we cannot make computers anymore.
EW (00:23:31):
True. Okay. Sorry. Rant over. Let us talk about hardware-in-the-loop testing, because-
KS (00:23:36):
Mm-hmm.
EW (00:23:37):
I think the whole, "How do you set it up?" is just a hard problem, and you have to accept it is a hard problem.
CW (00:23:43):
It is a total mystery to me. I always feel like I have experienced hardware-in-the-loop testing a little bit. But generally- Well- At Cisco we ran the software on the hardware, and so all testing was on the hardware. But they were big computer-like routers.
EW (00:23:58):
They could do debugging on the hardware.
CW (00:23:59):
Yeah. But for small embedded devices, it has always been like, "How? What if I have a screen? How do I interact-" There are all these little, "It is a device! How do you do this?"
EW (00:24:08):
I had one device where we piped in the signal that we were making it look for. Then we would listen on its output, let us call it "serial port, output port," to make sure we got the right signal characteristics.
CW (00:24:25):
So, sort of a black box, just as a pin out?
EW (00:24:27):
Yeah, as a black box.
CW (00:24:28):
Okay.
EW (00:24:28):
Does that count as hardware-in-the-loop testing?
KS (00:24:31):
Hmm? Yes. I think it is interesting that you are sharing some of the Cisco experience. The first time I started doing hardware-in-the-loop testing, I did not know what "hardware-in-the-loop testing" was. That was my first day at embedded software testing. So I understand how people feel about hardware-in-the-loop testing. I was one of them.
EW (00:24:58):
Could you define it for us better?
KS (00:25:00):
Yes. This is a fun way of- This is my favorite part, which is how do I make it fun for someone who has never done hardware-in-the-loop testing? So I have done this with my friends, I have done this with my parents.
(00:25:14):
I always tell them this joke, which is like, "Imagine- You know that I test self-driving cars, right? I cannot park a car at my desk to test self-driving cars. It is sitting somewhere else. And that is when the hardware-in-the-loop benches come in. To do that testing." That is the simplest way I can explain to my friends and family. <laugh>
EW (00:25:37):
That makes a lot of sense. I have had some seriously weird things on my desk. But once we get to the testing phase, I usually want them off my desk.
CW (00:25:47):
I have had 50 watt lasers on my desk. Which I tested on my desk. <laugh>
EW (00:25:52):
Yeah. And then at some point you want them to be over there, to be tested differently.
CW (00:25:55):
Yeah. So you do not burn holes in your monitor.
KS (00:25:56):
<laugh>
EW (00:25:57):
Or the wall. Or yourself. All the things.
CW (00:26:01):
But yeah. There are definitely different kinds of hardware that lend themselves to- Like, I have been working on a UAV, and doing a lot of testing on the ground without flying it, of various components and stuff. Because how do I do hardware-in-the-loop testing of something that has to fly? And I think probably <laugh> autonomous vehicles are a similar situation. Right?
EW (00:26:22):
Well wait. I did a whole autonomous vehicle project, and we just used simulators the whole time.
KS (00:26:26):
Oh.
CW (00:26:28):
But I do not- Right. Well that is- Okay. Well, I think we should put a pin in that and come back to that.
EW (00:26:32):
Okay. We will come back to simulators.
CW (00:26:33):
Yeah.
KS (00:26:36):
So hardware-in-the-loop testing is all about simulating some parts of it. But then the hardware- The hardware under test, which I sometimes call "a device under test." Most of the time I call it "DUT."
(00:26:48):
So that is where you are going to deploy your firmware. Then you are going to interact with those interfaces that the device has, from your test conducting computer, where you could be running your tests to orchestrate a real world scenario.
(00:27:06):
For example, imagine I have to connect over Bluetooth and then play music, and then record the music and see if the music quality is good. That is end-to-end automated testing using hardware-in-the-loop. And you have to do it with an NFC Touch.
(00:27:23):
So what I did in the lab for example, is that I would have the device under test with the NFC tag extended, and then I would attach the NFC touch thing on top of it, and then tie it down with tape, believe it or not.
(00:27:40):
Then I would initiate the test from the test conducting computer, which will trigger a Bluetooth connection, initiating the signals on the NFC tag. Which will internally send signals to the Bluetooth chip saying, "Okay. I got a tap. Go and attach this to pair this device."
(00:28:00):
And then you would have on the other side, you would connect the audio input output to your computer, so that you can play that once it is connected. Then you play the audio from your computer through the device, because it has an audio chip.
(00:28:15):
And then on the other end, it will loop back and come back to your computer. Then you record it and say, "Okay. I sent a sine wave. I got a sine wave back. Yay, the test passed." So that is a simple- That is a fun test that I have automated using hardware-in-the-loop benches. And then supporting devices like NFC tags, power supply, JTAG, whatever it is that you need to get your job done.
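Written as a test script, that end-to-end loopback scenario might look roughly like the pytest-style sketch below. Every helper here (the nfc_tag, bt_host, and audio_io fixtures) is a hypothetical abstraction over real lab equipment, not Komathi's actual code; only the shape of the test is the point.

```python
import numpy as np


def generate_sine(freq_hz: float, seconds: float, rate: int = 48000) -> np.ndarray:
    """Build the stimulus tone to play through the DUT."""
    t = np.arange(int(seconds * rate)) / rate
    return np.sin(2 * np.pi * freq_hz * t)


def dominant_frequency(samples: np.ndarray, rate: int = 48000) -> float:
    """Find the loudest frequency in the recorded loopback."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / rate)
    return float(freqs[np.argmax(spectrum)])


def test_nfc_pair_and_audio_loopback(nfc_tag, bt_host, audio_io):
    """Tap to pair over NFC, play a sine wave, and check it comes back intact."""
    nfc_tag.trigger_tap()                        # simulate the NFC touch
    assert bt_host.wait_for_pairing(timeout_s=10)

    tone = generate_sine(freq_hz=1000, seconds=2.0)
    recorded = audio_io.play_and_record(tone)    # out through the DUT, looped back

    assert abs(dominant_frequency(recorded) - 1000) < 5, "Loopback tone degraded"
```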
EW (00:28:41):
When you say "benches," you mean one of these systems is a bench? The whole-
KS (00:28:46):
Yes. Oh. That is our own jargon.
EW (00:28:52):
So when you say you have ten benches testing cars, you have ten car computers, not ten actual cars?
KS (00:29:03):
No, it could be the computer, it could be a sensor, it could be the cleaner for the sensor. It could be anything. But anything that is, "I am focusing on this." The center of that test bench is a DUT.
(00:29:17):
If I have something else connected to it, and then connect it to a computer to orchestrate a situation, like a scenario use case, that is a hardware-in-the-loop test bench. That bench has one DUT. Or, it has multiple DUTs. You can control all of them from your test conducting computer.
(00:29:35):
It is test conducting computer, plus device under test, plus any other accessories you need. Like power supply, or some kind of an attenuator for example, or a dSPACE.
EW (00:29:48):
You mentioned the NFC tag to send data into the device under test. Do you ever use just analog signals?
KS (00:30:03):
Mmm. Yes. But I think we use it when we are- It was like ten years ago, back in the UK. I used to work for Cambridge Silicon Radio. So. Wait. I do not think I particularly have used analog signals. Sorry, I am a little rusty.
EW (00:30:24):
No, that is fine. I just wondered if you had tools you used for that, but if you are not using that, then the answer is, "No."
CW (00:30:31):
Oh, like-
EW (00:30:32):
Like the Analog Discovery.
CW (00:30:33):
Signal generators and things like that? Yeah.
EW (00:30:34):
Yeah.
KS (00:30:34):
I have used them. Definitely. I have used them. I am just trying to recollect a use case, where I could just- Just like how I talked about how I played music end-to-end.
CW (00:30:45):
Right, right.
EW (00:30:45):
Right.
CW (00:30:45):
Yeah.
EW (00:30:45):
Right, right.
KS (00:30:45):
When I was testing audio products.
CW (00:30:46):
That definitely counts.
EW (00:30:47):
Well, I was thinking like-
CW (00:30:49):
An ADC.
EW (00:30:49):
We have seen some heart monitors.
CW (00:30:51):
Right.
EW (00:30:51):
Those are very good for- You shove in an analog signal, and you expect to get the Bluetooth out.
CW (00:31:00):
I did that. I plugged one of my synthesizers into the front end of the EKG.
EW (00:31:06):
Yes! But that was a one-off engineering test.
CW (00:31:08):
Yeah, that was me just testing at my desk.
EW (00:31:10):
If you wanted to build a test that continued to test, you would need-
CW (00:31:14):
A signal generator.
EW (00:31:14):
A signal generator. And it would be better if that signal generator was controlled via Python, probably?
KS (00:31:21):
Absolutely. I have done this, but it is that I cannot recollect.
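One common way to script a bench signal generator from Python is PyVISA with SCPI commands, sketched below. The resource string and the exact SCPI commands vary by instrument, so treat them as placeholders rather than a specific generator's command set.

```python
import pyvisa

def set_sine_output(resource: str, freq_hz: float, amplitude_vpp: float) -> None:
    """Configure a function generator for a sine output and enable it."""
    rm = pyvisa.ResourceManager()
    gen = rm.open_resource(resource)
    try:
        gen.write("*RST")                       # start from a known state
        gen.write("SOUR1:FUNC SIN")             # sine waveform (SCPI varies by model)
        gen.write(f"SOUR1:FREQ {freq_hz}")
        gen.write(f"SOUR1:VOLT {amplitude_vpp}")
        gen.write("OUTP1 ON")
    finally:
        gen.close()

if __name__ == "__main__":
    # Hypothetical USB-connected generator; replace with your instrument's address.
    set_sine_output("USB0::0x1234::0x5678::INSTR", freq_hz=1000.0, amplitude_vpp=1.0)
```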
CW (00:31:25):
Oh, everybody has- You do so many things you forget. There are a lot of devices that require some sort of human interaction. You mentioned the NFC.
KS (00:31:36):
Mm-hmm.
CW (00:31:36):
I have seen places where, in a device that has a UI or something, where people have made little robots-
EW (00:31:46):
Do you remember at Fitbit, where they made a little finger button and it just went down and pushed the button 9,000 times? <laugh>
CW (00:31:52):
I thought there was somebody doing that also, when we were at Maker Faire once. They had a whole framework for a button. It was a little robot button. You could script that to push buttons on a screen-based UI or something like that, or physical buttons on the side.
(00:32:05):
Have you done that? Are there other ways to do it? I guess I have seen things where you can- I think at Fitbit we did this too. Where you bypass the physical interface, and you could inject-
EW (00:32:18):
The signals.
CW (00:32:18):
UI commands to say, "Okay. Pretend a button was pressed." But then you have got code running on the device. It is not simulator code exactly, but it is like-
EW (00:32:29):
Short circuiting.
CW (00:32:30):
It is like little scriptable things, that you can access from testing, as if you had pressed a button.
EW (00:32:36):
Command line! I love the command line!
CW (00:32:38):
It feels like there is a continuum between leaving the device as it is, but then also leaving hooks in for testing.
KS (00:32:45):
Oh man. You have asked one of the most interesting questions, and my brain is all over the place right now. I am thinking, "Should I give this answer? Or this answer? This answer?" I think that is a very interesting question.
(00:32:57):
So I have literally tested button sensors. I went through the stages of pre-silicon, to tape-out, to shipping the product to the customers. I had the most amount of fun testing that system-on-the-chip product. And it is to do with the buttons. I did try different types of button pressing actuators. Also, you would have different types of thickness for those buttons.
(00:33:30):
So you need to- You would have a layer called "application layer," where I could literally, like you just said, load my test into that space, and actually run the test as if my test is the application that I am building on top of the chip. So you hit that. I have literally done that.
(00:33:50):
When we did it again, I felt like the concept was still the same. You just treat that button pressing- Let us just call it "robot hand" just for fun's sake, which I did not use. It is very inaccurate to use those things; there are much more powerful instruments out there. They might be expensive, but it all depends on where you want to put your money. But at the end of the day, you can also simulate those things as well.
(00:34:18):
So yeah. I think that is the concept I want to talk about a little bit. When you are designing your software, think about testing as an application that you would load. Which means you are thinking about testing from the time you are adding a line of code.
(00:34:33):
"How am I going to test this thing?" is something that you should think about. That would save you so much time, so much money, so much frustration that we are talking about. So that is a classic example of well-designed firmware and application software, where testing is part of the software development.
(00:34:53):
If you want, I can continue with the next interesting thing that came to my head.
EW (00:35:00):
Mm-hmm, mm-hmm.
CW (00:35:00):
Yeah, yeah.
KS (00:35:03):
This is one of the things I am passionate about, which is if I am asked to do some button pressing tasks, like the way you just mentioned Chris- I have a GUI-based application that I have to test that for all my- Let us say you have a tools configuration software.
(00:35:27):
If my job is to press the buttons and make sure all the widgets and gadgets and everything that is on the screen, combo box, dropdown, whatever it is that you have added as a developer, my job is to test and ship the product. And to test that on Mac, Windows, Linux, all kinds of OS. I do not know what customers are going to be deploying the tool configuration tool on. Right?
(00:35:50):
Then that is when I think about how do I automate myself out of this job? How do I sit with the developer and see what is in the backend of these UIs? Let me think about how they are generating the data. Is that some kind of text file? Or is it a JSON file?
(00:36:04):
If it is a JSON file, how do I- If I click this button, this particular register has to be written to or it has to be read. Then can I just apply some prefixes and postfixes that will let me predict, "What is this button going to do?" Then what I did is that I automated myself out of that job, by auto-generating the tests as they change the UI.
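A sketch of that "automate yourself out of the job" idea: if each UI widget is described in a JSON file alongside the register it reads or writes, the tests can be generated from that file instead of being hand-written per button. The JSON shape and field names below are invented for illustration.

```python
import json

# Hypothetical UI description a tool's backend might emit.
EXAMPLE_UI_SPEC = """
[
  {"widget": "button",   "name": "reset",       "action": "write", "register": "0x10", "value": "0x01"},
  {"widget": "combobox", "name": "gain_select", "action": "write", "register": "0x22", "value": "0x03"}
]
"""

def generate_test_cases(spec_json: str) -> list[dict]:
    """Turn each described widget into a register-check test case."""
    cases = []
    for widget in json.loads(spec_json):
        cases.append({
            "test_name": f"test_{widget['name']}_writes_register",
            "stimulus": widget["name"],
            "expect_register": widget["register"],
            "expect_value": widget["value"],
        })
    return cases

if __name__ == "__main__":
    for case in generate_test_cases(EXAMPLE_UI_SPEC):
        print(case["test_name"], "->", case["expect_register"], case["expect_value"])
```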
(00:36:31):
But before that, I had to press the buttons, and I had to be so uncomfortable, I had to hate my job, to get to a- It has to be something. What if the developers do not have to rely on me, and their tests are auto-generated as they change? They want to, "Okay. Today I feel like having a combo box in this UI." "No, I would like to have a radio button here." They can change their mind too.
(00:36:54):
I cannot be pissed off that their mind is changing every day, and then I have to rewrite my tests again. That means we are doing something wrong. So that is an interesting one.
(00:37:05):
Believe it or not, I could get in trouble for this one, but I am going to say it. I replaced about 160 people, including myself, by auto-generating tests. That is a very interesting, proud story of mine, although it is- I just got to start doing something more interesting after that, because I did that.
EW (00:37:28):
You mentioned that you got into this because the test team had been laid off. And now you are talking about auto-generating tests and writing yourself out of a job. Is test engineering a more difficult job to stay in?
KS (00:37:48):
I like the twist you just added. You are just like, "You just said this. But you also did this," so thank you for holding me to it there. Sorry, what was the last sentence?
EW (00:38:01):
Is it a job that does not have much stability?
KS (00:38:05):
Oh, I am living proof. It has really high stability, right here. I can give my take on it, and then listeners can take what they want to take from it. In my opinion, I stayed curious. That is why I got into testing. I enjoyed it and I just thought, "You know what? I think I am good at this. I am just going to step into this. Stay curious."
(00:38:29):
And then I also know that I had to evolve as the industry, as the technology, evolves. So I was testing telecommunication products in the beginning, like big boxes in the network somewhere. And then I started testing financial models for insurance companies, to calculate risks and claims.
(00:38:49):
So I completely went to the other side, where I was testing UIs and web apps and desktop applications. What I learned there is I stayed curious about how the user experiences it. How should I find issues before the users of those applications find them? I stayed curious there. But I also automated a bunch of stuff. It was easier automation.
(00:39:12):
Then finally stepped into embedded software. I was so fascinated. I still remember the time I asked my boss, then boss, Gordon, "What is a GPIO?" So I can never forget. It is staying curious. It is not worrying about what others think.
(00:39:30):
It is about, "I think something is there, and it is keeping me on my toes. I need to figure out how I can be the best at this. And how do I apply my strengths to be the best at this?"
(00:39:42):
And then also obviously, when you are automating, when you are testing, when you are finding issues, you are going to get on people's nerves. <laugh> Your job is to find issues. But then the timing matters, and you would end up frustrating some developers. You need to stay resilient. That is another thing.
EW (00:40:03):
Bugs are gifts. Getting a bug before the customer sees it, it is a gift. But sometimes you do not want the gift, right before you are going to ship. <laugh>
KS (00:40:12):
<laugh>
CW (00:40:14):
I have to say, I have had, unfortunately just a few, instances of working with really, really good developers, but they were really, really good. In one case I am thinking of, he sat with us- Not developers. Testers, excuse me. Development testers.
(00:40:28):
One case, he sat with us in our area. So we were the developers, he was the tester and we were basically seated in the same place, and we interacted constantly.
EW (00:40:39):
Large cubicle.
CW (00:40:41):
Yeah, yeah. Psychologically, he was very different from us. There is a thing with developers, where even if we are trying to break our stuff, we cannot. Because-
EW (00:40:52):
You think about how it should work.
CW (00:40:53):
There is something- We think about how it should work. And so there is this blind spot of, "Well, I am not going to try- I am not going to know to try certain things." And he would just- There would be these sequences where it was like, "Well, I want to do this, this, this and this. This happens." I am like, "Why are you even doing that?" And then-
EW (00:41:13):
Because you let me.
CW (00:41:13):
A week later he would demonstrate that, "Well, that is a very common thing that happens." It is like, "But. Oh." He had this attitude of- Maybe it was an age thing. He was an older gentleman. But he had this attitude where if he found a bug, you could not be mad at him, because you were the one at fault. <laugh>
EW (00:41:37):
Only mad at yourself. Which is true.
CW (00:41:39):
But it was like, "Well, I did this, and your stuff did this." And instead of like, "Well, why did you find that?" it was like, "Well, I am very ashamed." But there were certain relationships with testers where it really just worked, and that was one of them.
(00:41:50):
But I think there is a different mindset, and I think you have touched on it. A different mindset from a good tester, to a good developer. Because a developer, we are biased to we want things to work. And a tester, while they want things to work, they want things to work after they find everything that does not work.
(00:42:08):
I do not know if it is just, like you mentioned, curiosity. There is a different form of curiosity, or what have you. But I have noticed that good testers are few and far between. I do not know if we need to train people differently to become good testers, or if there is just something temperamental.
EW (00:42:23):
Or if we need to stop telling them it is a dead end job.
CW (00:42:26):
<laugh> Definitely.
EW (00:42:26):
Because it is not.
CW (00:42:26):
Definitely.
KS (00:42:27):
It is definitely not. I do have to say, Elecia, with AI doing the testing tasks, that could definitely be making you feel a little bit intimidated. Like, "Oh, my God. I could be replaced by AI." Like I just talked about, "Can I do my job so well, that I could replace myself with automation, and then I can go and do something fun?"
(00:42:50):
I would apply the same concept to AI testing. How do I leverage AI to do better testing? How do I make it do the fun, but very corner, cases, so it still finds the bugs? How can I take advantage of that, instead of thinking that AI is limiting my options now? So that is another way. That is another thing I always think. "How can I be ahead of AI?" is my motive right now.
EW (00:43:21):
I think that is really important, and I think that is a whole 'nother conversation.
CW (00:43:23):
<laugh>
KS (00:43:23):
<laugh>
EW (00:43:23):
If we start now, we will be here for at least another hour. Because I think that is a fear across the industry. It is not just testing.
KS (00:43:33):
Yeah, it is the same concept about testing that I think I always applied in the past, which is I moved from domain to domain to domain. I remember one of my close friends said, "Komathi, you are never satisfied." It is not about satisfaction. It is that it is not challenging enough for me. So how do I solve more complicated problems next?
(00:43:52):
Then I went from simple semiconductor chips for Bluetooth, to all the way to chips for running a whole car. That is how complicated the embedded software testing got for me. But it is all in your hands and your mindset, like you said, Elecia.
EW (00:44:09):
You have gotten to work on some really interesting applications. That has always been what I liked about embedded systems, was that the devices go off and do things, and I get to be part of their lives. But you have mentioned, I think, financial software and cars and buttons. You get to see a lot more, if you are curious beyond what needs to get done today.
KS (00:44:37):
Yeah, absolutely. I feel like my career would have been absolutely different, if I had just stayed on telecommunication. Because look, we have 5G, LTE. What else can we do with that? We are thriving there.
EW (00:44:50):
You are giving a talk at the Embedded Online Conference, which Jacob asked me to say a few words about. Jacob being one of the organizers.
(00:44:59):
It is taking place the 12th through 16th of May. Lots of engineers bringing their experience, with practical presentations focused on embedded systems development. Philip Koopman, James Grenning, Jacob Beningo, have all been on the show. You can get $50 off your registration, with the promo code EMBEDDEDFM at checkout.
(00:45:25):
Okay. Sorry. You are giving a talk, and it is about, I do not know, testing? Maybe your hardware-in-the-loop?
KS (00:45:37):
Yeah, it is like arranging the ducks in a row.
EW (00:45:39):
<laugh> I love ducks. The ducks on your slide are so cool.
KS (00:45:42):
<laugh> Automated hardware-in-the-loop test process, in other words.
EW (00:45:47):
I am sorry, could you say it again? I was nerding out about the ducks.
KS (00:45:53):
<laugh> I said, I am talking about arranging the ducks in a row. In other words, automated hardware-in-the-loop test processes.
EW (00:46:02):
What do you hope people will get out of your talk?
KS (00:46:06):
They would know what "hardware-in-the-loop testing" is.
EW (00:46:09):
We have already covered that. I mean people here. No, sorry.
CW (00:46:13):
<laugh>
KS (00:46:17):
<laugh> And then they would know how to incrementally add continuous integration based automated test processes, even with hardware-in-the-loop. When you would think that, "Hey, if I add a hardware-in-the-loop based test gate to my PR testing, I am doomed. Because we have one bench and we are sharing among ten developers, and that bench could be broken." So that is a fear everyone has.
(00:46:42):
This would give a little bit of an idea of how you are going to approach that real challenge, because that happens in a lot of startup companies. You have- Like you said, "I have three DUTs connected and it is sitting on my desk. I am in the early stage of development. How do I bring it up quickly? And I would still want to develop faster. I want to break things, but without breaking my whole day."
EW (00:47:07):
I have never let the smoke out of the chip with my software. How often do you have hardware-in-the-loop benches that do break? I mean, software really is not supposed to break the hardware, or else I think the hardware qualifies as weak and insufficient.
CW (00:47:28):
Do you have fire extinguishers on all those benches?
KS (00:47:32):
We do. Literally. I remember there was a time that we were- I got to a point where- One of my engineers recently, I caught up with him back in the Bay Area, and he said, "Hey, we used to- We had 400 plus benches connected remotely. Then people were able to access them and run their tests manually or through CI. That was awesome!"
(00:47:55):
But what happened is that in that building, when we had those many benches, we were sucking so much power. We used to have power cuts. <laugh> So it does happen. You just have to be prepared for everything with respect to testing, I guess. Just to say it lightly.
EW (00:48:17):
Okay. But assuming you are not actually frying boards, or trashing motor drivers, how do the engineers tend to break the benches?
KS (00:48:28):
Yeah, so let us say you are a company that is developing audio chips. Obviously that is your specialty and you are great at it. You obviously think about how do I create a platform-based library, so that I can just use the platform, common platform, across all my chips. Most of the common breaking problems happen there.
(00:48:56):
Which is, if your platform is broken, whatever the firmware you build on top of it, you are going to break the bench. "Brick" is another word we use in hardware testing. That is another bit of jargon we use here. What it means is that you push code that has the ability to break many other firmware builds on top of it.
(00:49:20):
And that means you are trying to do something very cool, but you tested it on maybe one device in a test. Maybe let us call it A, but you did not test it on B and C, so you do not know. Because you do not have those resources, you could only test it on A.
(00:49:39):
And then hope, pray to God, and then push the code, and then you end up breaking- That code would break for B and C. So that means you have got a bug at the platform layer that has the potential to break most of the benches. Maybe just one bench might be safe.
EW (00:49:57):
Is this where B and C might have minor hardware differences or major hardware differences, that should be accounted for? Or is this where B and C should be the same hardware?
KS (00:50:08):
They could be different types of hardware, but it is supposed to use the same platform code, OS platform code.
EW (00:50:20):
Okay. I just wondered how many devices I needed to run the same software on, to prove that the same hardware should work, which those two together should be okay. And if they are not, that is a different problem.
CW (00:50:33):
Yeah. That is another complication, right? If you are in a development environment where there are new hardware revisions, you have got to rev the benches pretty often too, right?
KS (00:50:41):
Absolutely. That is exactly the problem. So my job is to keep the relevant environments up to date.
CW (00:50:47):
Okay.
KS (00:50:50):
Sometimes it changes so fast, considering that we work mostly remotely. You need to be very fast at planning for all those things, and then making sure that those things are available. But then sometimes the software development can go faster than you think, so you might not be prepared for that.
(00:51:07):
But I do have another way of breaking- I think I would say, why would you break software when you are not supposed to, when you push the code? Let us set aside the case where the platform OS software could break many other firmware builds on top of it.
(00:51:20):
Let us talk about situations where you said Elecia, like, "Oh, it works on my setup. How did it break in the CI?" So that is the next problem. It is just that maybe your device might be in a good state, that when you loaded the new software that you are trying to develop, when you do a firmware upgrade, it all works fine.
(00:51:40):
But then when you go to a CI environment, it assumes starting from scratch: deploy this chip and bring it up. That could be a place where it could break. So that is a very common problem I find, with respect to firmware testing, where developers are so confident, "I definitely know that it cannot be. It is not possible."
EW (00:52:00):
"It works for me." Yeah. Oh yeah.
CW (00:52:03):
"That cannot happen. That is impossible." I remember standing over the tester I was <laugh> just mentioning. Yeah.
KS (00:52:13):
Because I do not have the bias. You see that. I think someone else mentioned, I do not have the bias that I- I think, you told me to test within the square. I am still testing within the square. Unless you say you changed your requirement to test within the circle.
CW (00:52:29):
Right. I think as developers we tend to think about what we intended to do with the code, and think that is what we actually did do. So that when something goes wrong, it is- We know what we intended to do, but we are confusing that with what we actually did. My intent was to catch this failure, which I did not. <laugh>
KS (00:52:54):
Yeah, I understand you. I have sympathy and empathy. But this reminds me right, there is a good- I want to add something positive here. This is why I often design a special setup called "brick test bed." So basically it is a bench you are allowed to break, because I have every single possible way of recovering that bench remotely, using automation.
(00:53:22):
So that is a special bench, which has special particular accessories, so that if you want to run your very questionable tests with your innovative software...
EW (00:53:35):
<laugh> She is so polite.
KS (00:53:36):
Because I think if a test engineer accepts the fact that developers are going to innovate, then I want to give them a playground too. And that is brick test bed bench, where it is a playground for developers to think that, "You know what? I am allowed to break this, because I am innovating something."
EW (00:53:54):
I like how she says "innovating." But she seems to mean "messing it all up." <laugh>
KS (00:53:58):
Come on.
CW (00:53:58):
Flailing. Do you have a little hammer on a solenoid? So you can smack the board for when you need to percussive-
KS (00:54:05):
Hey. Hey.
EW (00:54:06):
<laugh> Percussive maintenance.
KS (00:54:07):
Well, my benches are going to be locked inside a room you cannot enter, so you cannot do those things.
CW (00:54:13):
So you definitely got to have a hammer.
KS (00:54:14):
Violence is not allowed.
CW (00:54:14):
Oh. <laugh>
KS (00:54:14):
I come from <laugh> the world of Gandhis.
CW (00:54:20):
<laugh>
EW (00:54:24):
I mentioned simulation earlier. Do you think hardware-in-the-loop is really that much better than simulation?
KS (00:54:33):
I think there is no question about what is better, what is not, what is bad. It is a question of when is the right time to do what type of testing. I think something I use in general in my design, is that use the right tools to do the right job. It is just like that.
(00:54:52):
For me, simulation is a tool set, and hardware-in-the-loop is also another tool set for me. I would apply hardware-in-the-loop testing wherever you need real-time interaction with physical hardware. But physical.
(00:55:07):
Physically, you need that physics and electrical interactions, and how your software, which is the embedded software firmware, reacts with those things. Where I would like two devices to talk on their physical interfaces, and make sure their interfaces are working. They are able to be integrated into a system and then all the way to a fully integrated whole car, or a whole camera device. Right?
(00:55:35):
So it all depends on when you use hardware-in-the-loop testing. I would use in the earlier stage of the product- Let us say you just got your hardware, you are just developing this firmware. You want to make sure all these interfaces work, so that it actually captures an image and then sends it over to you for processing later.
(00:55:54):
Before you apply simulation saying, "Okay. My car is running." And then you have all these cameras and sensors capturing all these signals, and now am I taking a left or going straight or pressing a brake.
(00:56:06):
So it all depends on what is the- You need to decide what to use simulation for, what to use hardware-in-the-loop testing for. In fact, sometimes unit testing catches most of the problems.
CW (00:56:17):
Right.
EW (00:56:20):
Is unit testing different than what you are talking about?
KS (00:56:26):
Yes.
EW (00:56:26):
Okay. How?
KS (00:56:29):
Unit testing could be like, okay, you are adding a new API to your module. It could be a simple file. And you just want to run a bunch of tests, sending different inputs to your API attributes. Then you just want to figure out how well it works, how gracefully it fails when it fails, how well it throws errors so that you can debug.
(00:56:59):
Unit testing is just breaking a simple space, like one- Or sometimes you could be writing unit tests for API to API interaction. Or you could always be writing unit tests, where you literally do not need anything else. You can just run those tests on your local desktop without needing anything. So just step by step as you are developing, you are testing your APIs. That could also be unit test.
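A small example of the kind of unit test described here: exercise one API with a range of inputs, including ones that should fail gracefully. The function under test is invented purely for illustration.

```python
import pytest

def parse_percentage(text: str) -> int:
    """Toy API under test: turn '42%' into 42, rejecting bad input."""
    value = int(text.rstrip("%"))
    if not 0 <= value <= 100:
        raise ValueError(f"out of range: {value}")
    return value

@pytest.mark.parametrize("text, expected", [("0%", 0), ("42%", 42), ("100", 100)])
def test_valid_inputs(text, expected):
    # Happy-path inputs should parse to the expected numbers.
    assert parse_percentage(text) == expected

@pytest.mark.parametrize("text", ["101%", "-1", "forty-two"])
def test_invalid_inputs_raise(text):
    # Out-of-range and malformed inputs should fail loudly, not silently.
    with pytest.raises(ValueError):
        parse_percentage(text)
```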
EW (00:57:26):
And hardware-in-the-loop comes in, when you are doing more system integration testing.
KS (00:57:31):
Yeah. I would say this is how I arrange my ducks in a row with respect to testing. Which is do your static analysis. Just have your build system built with static analysis. It finds so many bugs.
(00:57:46):
And then finish all your unit testing. If you do not catch any bugs there, then you use your expensive testing, that would need more resources.
CW (00:57:55):
And that is a way to think about it, right? There are layers of testing. At the top of the expense layers is, "Okay. I have got real hardware with a bench." With lots of things, and there are limited resources. It is expensive in time. It is expensive in parts.
(00:58:08):
And then moving down to the least expensive testing, which is-
EW (00:58:13):
Compile without warnings.
CW (00:58:14):
The developer. Yeah. The developer compiling it and running for a few seconds or something. But I think you were talking about the difference between, well, simulation and hardware-in-the-loop testing, and there is right tools for the right job.
(00:58:29):
I think that goes to expense too. Where do you get the most bang for the buck, and where should you test certain kinds of things? You should not be testing an API change on the hardware-in-the-loop thing first, right?
KS (00:58:44):
Yeah. Let me say an example, right? If you are testing parking features of a car using hardware-in-the-loop testing, you are definitely doing it wrong. That is where you should use simulation.
CW (00:58:55):
Okay.
EW (00:58:56):
Why?
KS (00:58:56):
<laugh>
EW (00:58:58):
Because it is a system?
CW (00:58:59):
Because you are going to crash the car.
EW (00:59:03):
Well. Better I crash it in the test bay.
CW (00:59:05):
Simulator.
EW (00:59:07):
Oh, true. Okay.
KS (00:59:07):
Yes, yes. You can crash. You can create some fantastic safety related tests using simulation. Use them.
CW (00:59:16):
Think about the project you have been working on, and how often you flung the poor hapless simulated human into another dimension.
KS (00:59:24):
Well, that explains it.
EW (00:59:28):
But that is like they really enjoy it. <laugh> The little simulated guy, he just falls off and he tumbles over and-
CW (00:59:35):
Then the physics engine breaks, and he starts spinning at 7,000 RPMs.
EW (00:59:37):
Yeah. That is the best part.
KS (00:59:39):
And imagine simulating a plastic bag flying, and you have to test it.
CW (00:59:44):
Yeah. Yeah.
KS (00:59:45):
Simulation. Not real hardware-in-the-loop testing.
EW (00:59:49):
It depends on what you are testing.
CW (00:59:50):
Yeah, yeah.
KS (00:59:50):
Exactly.
EW (00:59:51):
And it goes back to where you need to spend your money and time.
CW (00:59:56):
And if you are likely to actually damage something? <laugh>
EW (00:59:59):
The vehicle project that I did was all simulation until we got to the actual vehicle. It had very constrained interfaces and a well-written simulation before we got there.
CW (01:00:13):
And a human operator who could stop it.
EW (01:00:16):
<laugh> Yes, but he did not. And that is what is good about a vehicle test.
CW (01:00:20):
<laugh>
KS (01:00:22):
I do have to say, throughout my career of embedded software testing, I have only used simulation testing in this industry of testing autonomous vehicles. But in the past, I did not have to use simulation at all. Most of the bugs were caught by unit testing and simple API to API integration testing, to be honest. Before hardware-in-the-loop testing.
EW (01:00:42):
Robots.
CW (01:00:43):
Yeah. I think simulation really does come in when it is something large and expensive and interacting and- Yeah.
EW (01:00:49):
And when the hardware itself is so expensive, that you cannot-
CW (01:00:53):
Give one to every developer.
EW (01:00:53):
Do hardware-in-the-loop bench testing, until the hardware is done.
CW (01:00:57):
Yeah.
KS (01:00:57):
Right.
EW (01:00:57):
You have to use simulation, because it just does not exist yet.
CW (01:01:01):
But for a Fitbit, we tried to do some simulation stuff for Fitbit, and-
EW (01:01:04):
And it was not worth it.
CW (01:01:05):
It was a lot of work, and it ended up just not being all that useful. It became more useful when we had apps, but that was less of a simulated environment and more of an emulated environment.
(01:01:13):
But it was like, "Oh, we were going to simulate the whole thing. You could run it in QEMU and load the same firmware and stuff." And it was like, "Yeah. But I have five on my desk and they all work." So.
EW (01:01:24):
Right.
KS (01:01:25):
Emulation is another space. I am glad you mentioned [QEMU]. One problem with emulation is that you need to maintain it. You are writing a lot of code to maintain it, so that you can use emulation instead of hardware-in-the-loop. Sometimes it is easier to say, "Let us just run these ten tests on hardware-in-the-loop. We have one bench, right? And you have automation, right? Let us just go for it."
EW (01:01:45):
Yeah. And simulation can be a huge amount of software. All the simulators I am using right now are things other people wrote for other large projects. I am just mooching on the end of what other people did. If I had to write the simulator, I would be like, "No."
CW (01:02:03):
That is it. That is different.
EW (01:02:04):
I could not.
CW (01:02:04):
I think that is a third role, the people who maintain those kinds of test tools. Yeah.
KS (01:02:10):
For me, simulation is like magic. Oh my God, I am fascinated. How much design and science goes into creating all those simulation platforms. Man, that is next level testing.
CW (01:02:26):
I sound like I am a little bit down on it, but then there is stuff like Wokwi, which-
EW (01:02:30):
Wokwi is an online-
CW (01:02:31):
Simulator of several different development boards.
EW (01:02:34):
Development boards and processors.
CW (01:02:36):
And platforms.
EW (01:02:36):
Running different languages. It is amazing. You can do things that are not physically possible, because you do not have to care about current and power, and such plebeian things like electrons.
CW (01:02:48):
I think there is going to be an overlap of hardware-in-the-loop and stuff like that. Where yes, you are doing hardware-in-the-loop, but it is not real hardware. That is where it gets confusing. <laugh>
KS (01:02:58):
You see, testing is not a dead end.
CW (01:02:59):
Oh. No.
KS (01:02:59):
You can just keep on up-leveling.
EW (01:03:03):
I totally agree. Komathi, we have kept you quite a while. Do you have any thoughts you would like to leave us with?
KS (01:03:11):
Oh, yes. Absolutely. I would say, "Stay curious. Be the best friend of developers, and let them break things themselves, so they can build the systems faster." That is the main thing that I am focused on. And also think about how you replace yourself: "How do I automate myself, as a test engineer?"
CW (01:03:36):
<laugh>
EW (01:03:36):
<laugh>
CW (01:03:37):
Or not, as a test engineer. Just yourself in general.
EW (01:03:40):
"I can replace myself with a small shell script."
KS (01:03:42):
I would do it, and then I will do something more fun.
EW (01:03:47):
You have a new website with blog posts, and more information about talking to you. Could you mention that?
KS (01:03:54):
Yes. I recently launched my website, thekomsea.com. It just happened. Again, one day I was just frustrated with the tools and automation and then the testing terminologies, and I thought, "You know what? I am just going to write about it, in a funny way."
(01:04:14):
So yeah, I started writing some <laugh> interesting blogs, especially one referring to croissant versus bagel. It really caught a lot of people's eyes, and I am definitely in trouble for that. But yeah, then I thought, "You know what? I have a good way of explaining some complex things with easy metaphors. Why not just write more, stay curious, and see what comes out of it?"
(01:04:41):
And then I realized I want to invest back into the testing community, and I want to spread this awareness and teach more people out there how to do smarter testing. That is how The KomSea was created. It is doing well. I have some fun, exciting projects that will launch through this, that will help the community again. So yeah, that is my infant at this time, that I am taking care of in my free time.
EW (01:05:10):
Excellent. Your blog posts are very interesting and informative about testing, and the whole environment that many of us find, if not frustrating, still mysterious.
KS (01:05:23):
Yeah.
EW (01:05:24):
Our guest has been Komathi Sundaram, Principal Software Engineer, AV Simulation Testing, at Cruise. Her website is thekomsea.com. That is "the" K O M S E A dot com. You can see her presentation on hardware-in-the-loop testing at the Embedded Online Conference in mid-May. You can use the coupon EMBEDDEDFM for $50 at checkout.
CW (01:05:50):
Thanks, Komathi.
KS (01:05:51):
Thank you. Thank you for this opportunity. I had a lot of fun talking to both of you about testing.
EW (01:05:58):
Thank you to Christopher for producing and co-hosting. Thank you to Dennis Jackson for the introduction, and some of the questions. Thank you to Nordic for their sponsorship. We so appreciate it. Please sign up for their giveaway. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.
(01:06:17):
And now, I do not know, do you want to quote or a fact?
CW (01:06:22):
Fact.
EW (01:06:24):
All right! Hammerheads, which are Komathi's favorite animal, have incredible electrical sensing abilities. One of the most fascinating facts about them is their extraordinary ability to detect electric fields. All sharks possess special organs called "ampullae of Lorenzini," which allow them to sense electrical fields produced by other animals. That is how they find the little fishies.
(01:06:48):
However, hammerheads have these sensory organs spread across that wide hammer-shaped head, which gives them the enhanced ability to detect even the faintest electrical signals. It gives them a nice wide time difference of arrival. I do not think they calculate it that way.
(01:07:05):
This adaptation, because they are kind of late in shark evolution, helps them locate prey hiding beneath the sand on the ocean floor. They just swing that little nose around, and they can detect the electrical impulses from a stingray's heartbeat from under several inches of sand. That is their superpower, and they are one of the most effective hunters in the ocean.