440: Condemned to Being Perfect

Transcript from 440: Condemned to Being Perfect with Jeff Gable, Luca Ingianni, Chris White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White here with Christopher White.

JG (00:00:12):

And I am Jeff Gable, and I am joined by my co-host of the Agile Embedded Podcast, Luca Ingianni.

EW (00:00:17):

So we are doing a crossover episode this week, so we can get accustomed to each other. You are the Agile Embedded Podcast. How long have you been around and what do you talk about?

JG (00:00:34):

It has been about two years. Luca?

LI (00:00:35):

Sounds about right, yes.

JG (00:00:36):

Yep. We publish mostly biweekly. So we are about 43 or 44 episodes in. We essentially- the thesis of our podcast is that you do not have to choose between development speed and quality, and that Agile techniques can apply to embedded. We talk about different ways that you can apply Agile techniques to embedded development, and use that to develop more effective products more quickly. What do you think, Luca? Is that a fair summary?

LI (00:01:14):

I think that is a really fair summary, yes.

CW (00:01:17):

All right. Well, I am interested to be convinced. <laugh>

JG (00:01:22):

<laugh>

LI (00:01:24):

Convinced of what?

CW (00:01:26):

I have some experience with Agile. We can talk about that at some point, and you can tell me that the people that were doing the Agile where I was were doing it wrong, because I suspect they were doing it wrong. I have a mixed relationship with Agile.

JG (00:01:41):

It is funny, we almost feel bad naming it that. I do not know what else to call it. We have talked in the past about the big A "Agile" has been so overloaded and overused, and now means different things to different people. "Nimble" is the word that I have heard a few people say recently, that I liked. Basically, if what you are doing makes you more nimble, then it is a good thing. If what you are doing locks you in and makes you less nimble, that is a bad thing. That is maybe a word that helps you get back to first principles.

CW (00:02:19):

I like that.

EW (00:02:20):

Well, not short. When you and Luca were talking about your Agile Manifesto, there were some words that made a lot of sense to me: "risk management through feedback loops," which is going to make an awful acronym.

CW (00:02:37):

<laugh>

LI (00:02:38):

<laugh>

JG (00:02:43):

<laugh> It is not really any worse than "rolling on the floor laughing out loud," but yeah.

EW (00:02:49):

That is true. If I could tell a manager that, instead of using the loaded "Agile" word, I think that would be an easier sell. At least in some places.

JG (00:02:58):

I agree.

LI (00:02:59):

Yeah. The problem is if you deviate from this term "Agile," then you need to explain yourself at length. Like, "Well, we are talking about risk management and this is not what I am interested in. I am interested in software development methodologies, or what have you." And you need to rein them in and say, "No, this is exactly what you should be caring about. I promise you that what I am about to tell you is a helpful thing." But on the other hand, of course, if you stick to "Agile," then you are inviting all kinds of-

EW (00:03:32):

You have to explain that too.

JG (00:03:34):

You have to explain yourself either way. Yeah.

CW (00:03:36):

And then you get people like me who are like, "Oh, no <laugh>."

JG (00:03:40):

Well, because you have had experience at places: "We do Agile Scrum."

CW (00:03:44):

Right.

JG (00:03:45):

Which means we have standup meetings.

CW (00:03:46):

Right.

JG (00:03:46):

And that is about it. And we have retrospectives where everyone says, "Yeah, it was great."

LI (00:03:51):

Exactly. We have a Waterfall process with stand-ups. Woohoo.

JG (00:03:55):

Waterfall process with stand-ups <laugh>.

CW (00:03:57):

Well, yes, that is what it usually boils- That is what it has boiled down to, in the past for me.

EW (00:04:01):

My first experience with Agile involved standup meetings that were two hours long.

JG (00:04:05):

<laugh>

EW (00:04:07):

So ever since I have had this little Pavlovian response of, "Oh God, no."

JG (00:04:14):

I am so sorry.

LI (00:04:15):

Well, I have met somebody who told me that their team had trouble keeping the stand-ups short, and so they made a point of having the stand-ups on one leg.

EW (00:04:26):

<laugh>

CW (00:04:28):

Well, see, that was the problem. Every time I had stand-ups, nobody stood up. Everybody was mostly remote, so nobody- Everybody was happy in their house, sitting in their comfy chairs <laugh>.

LI (00:04:40):

<laugh>. Yeah.

JG (00:04:43):

We have mentioned this on our podcast in the past, Agile does not mean Scrum.

CW (00:04:47):

Right.

JG (00:04:47):

You do not have to do sprints, if it does not make sense for your team. You do not have to do stand-ups. If you are communicating, and constantly letting the team know if you are blocked and need help, then you do not- The only purpose of a standup is to enforce that. So if you are satisfying that some other way-

LI (00:05:09):

Not even enforced, but facilitate.

JG (00:05:12):

Encourage. Facilitate. Yeah, exactly. So do not do stand-ups if they do not make sense. We do not advocate a particular methodology at all. We advocate principles: do things that help the forward flow of work, from the idea, all the way to getting value out of that, which usually means shipping it to a customer. But if that is not appropriate- Because for me it is medical devices, those are what I work on. The release cycles are slower, but I try to do faster feedback loops within the engineering development side.

(00:05:53):

Then I encourage, as much as I can, the business to do faster release cycles, but there is a limit on that. Like if you do an FDA submission, they are going to take a long time to get back to you. There are ways around that, but the point is, just trying to encourage that fast flow of work to get value out, and encourage the feedback loops to get back upstream as quickly as possible. And there are lots and lots of ways to skin those cats.

LI (00:06:23):

Yeah. I think we are getting back to what Elecia said about Agile as risk management. The point is to de-risk developing your product, because you are working in this field of double uncertainty. Technical uncertainty: we do not know how to build it, or if it is even buildable. And product uncertainty: we do not know what precisely we should be building. So we need to de-risk this double uncertainty, by proving to ourselves that we can build something, and that what we have built carries value for somebody. And that is the entire point.

(00:07:01):

And by the way, the Agile Manifesto was written in 2001. It is not like Agile sprang into existence then. I remember when I was studying mechanical engineering in Germany, my professors would bang on about this thing they called, "[German], the engineer's approach." And that was, at its heart, Agile. The idea was to come up with an idea, build a prototype, and then look at it. And then go from there.

CW (00:07:32):

That makes a lot of sense to me. The best times I have had have been working in small teams, where either I was leading the team or a part of the team, but they were small teams, less than ten people usually. We never had an official methodology in a lot of those places. But we settled on that kind of iterative process, even at medical device companies. I guess where I have some trouble, is when the teams get larger and when you have multiple teams. I do not personally know how to do that, how to keep that spirit going, or that effectiveness.

LI (00:08:08):

Yeah, that is of course something very difficult. That is something that the Agile community at large is currently struggling with. You get all sorts of proposals, like for instance "the Scaled Agile Framework" for enterprises, and all of these things. They all try to give you some kind of a framework to scale up beyond a single team, to teams of teams, or even teams of teams of teams. Or in the case of SAFe, I think, up to teams of teams of teams of teams.

(00:08:39):

So a couple of tens of thousands of engineers working on, a car, let us say. That is just really difficult and it will slow you down, because at the heart of it, you need to have that easy flow of communication, and that starts to break down. So the only way you really have is to remove dependencies. But there are some dependencies that you will not be able to get rid of. So I think, in principle, it will just have to slow down.

(00:09:11):

If you are interested in aviation, there is a really interesting book, which is called, I think, "Skunk Works." It talks about exactly this, the special projects branch of the aircraft manufacturer, Lockheed, and the guy who founded Skunk Works, I cannot remember his name.

JG (00:09:32):

Kelly, Kelly Johnson.

LI (00:09:33):

Kelly Johnson. Yeah, exactly. He said, "Skunk Works must never grow beyond 150 people, including the cleaning lady and the secretary and whoever. Because once it grows beyond that, I cannot maintain the speed that I have." They did exceptional things. The cycle time in the aviation industry is like ten years roughly, usually. They were able to go from signing off a contract to first mission flight in 18 months, for aircraft that are supposedly impossible, like the SR-71, which I think still holds the speed record. He maintained that he was only able to do it because he had such a small team, and he was very aggressively keeping the iterations small and focused.

EW (00:10:27):

Skunk Works was special, in that they had insane amounts of money and intelligence concentration. It is like HP Labs was really good at doing things, because they could get whatever they wanted. The Skunk Works book is fantastic. I totally agree with you there. I just do not know how to apply that to everywhere, because somebody has to write the grocery store application. Somebody has to write the software for the refrigerator <laugh>.

LI (00:11:05):

The principles do not change. Fine, you do not have a shed full of jet engines at your disposal.

CW (00:11:11):

Why not?

EW (00:11:12):

Yeah.

LI (00:11:14):

Actually, yeah, why not? I had a professor at university who had two rocket engines sitting out back in his shed behind his house, because like-

JG (00:11:23):

Like you do.

LI (00:11:24):

No one wanted them, so he- Yeah, exactly. Like you do <laugh>. But the principles all stay the same.

JG (00:11:33):

Back to your question of how to scale this up, I think Luca has more experience with that than I do. I certainly do not have an answer. I think, if you just-

LI (00:11:43):

Nobody does.

JG (00:11:43):

No one does.

LI (00:11:43):

A lot of people pretend that they do.

JG (00:11:45):

Right. And if you say, "Okay, we are going to do SAFe." I have never used SAFe myself. I really doubt that it is effective, but I do not know. But I guess, the overall principles are, look at your value stream. What are all of the steps necessary, from when someone has an idea, to when it actually gets in front of a customer and starts producing value, and how do you shorten and automate that? That chain is going to happen over and over. The big fallacy of Waterfall is that that chain happens once.

CW (00:12:18):

Right.

JG (00:12:19):

Which is just not how life works. So if you know you are going to do that chain multiple times, smooth out that chain, and speed it up as much as possible, and that is it. If your organization is 10,000 people, the solutions that you come up with to do that are going to be very, very different, than with a team of less than ten people, which is most of my experience too.

(00:12:41):

Right now I am a solo developer, so I am usually, at least as far as the software, a team of one. So I get to <laugh> speed that value chain up as much as I want. I do not answer to nobody. But I still work with product development teams, so I am having to work with them and try to speed our collective development cycle time up.

(00:13:07):

The solutions you come up with to speed that up and then get information back upstream, are going to be very different depending on the size of your organization, the complexity of your product, what hand you are dealt. Do you have a team of all stars? Do you have a team that you inherited that is not great, that you have got to improve over time? There is no one simple fix here.

EW (00:13:30):

How is Agile that different from a Waterfall spin that has good milestones? I have never done a straight Waterfall. It always spins.

CW (00:13:45):

It is funny you said that, because I was about to mention that when I have done FDA products, you have to include your software methodology as part of the submission, and all that stuff and document it. What I had done- This was many, many years ago before Agile had taken off. I would take Waterfall and I was like, "Well that iteration, that is dumb." So I would write in there and change the diagram and make a little loop in there. "We do this about eight million times, right here, <laugh>." Then I would submit that. Waterfall with this little spin.

EW (00:14:11):

Waterfall spin. And you get milestones. And those are what I often think of as Agile releases.

CW (00:14:20):

Right.

JG (00:14:21):

I would characterize that as maybe an Agile engineering development process within a Waterfall organization.

CW (00:14:27):

Ah.

JG (00:14:30):

Which is sometimes the best you can do. I think we literally even have an episode on this. You are a product developer and you want to get faster, but you are stuck in this organization. Sometimes the only thing you can do is do lots of spins in your organization. You have to push that practice of trying something, getting feedback and then going back and fixing things. You have to push that practice into the boundaries of the organization that you touch.

(00:15:02):

So if you are working with whoever is specifying the high level requirements- Maybe in the end it is marketing, but it is essentially a business problem of what product we are going to develop. The faster you can finish your cycle, then show that to those people, and hopefully show it to a user who talks to those people, the better you facilitate that larger loop. I think that kind of achieves what you are talking about.

EW (00:15:36):

In medical companies-

JG (00:15:37):

Makes sense?

EW (00:15:38):

Oh, yeah. And in medical companies, or things like Skunk Works, the end user often has no idea what they want <laugh>.

JG (00:15:47):

Right.

EW (00:15:47):

And will change their mind many times.

LI (00:15:50):

As opposed to anyone else.

EW (00:15:52):

True. Indeed. But sometimes consumer products, you do get that feedback pretty quickly, and if you have the right sample set, it actually works out.

CW (00:16:03):

You just ship the wrong thing.

EW (00:16:03):

Or you ship the wrong thing. But especially with large contracts, like Lockheed does, you have a contact that you show it to, and they are the customer. They are not going to take it back to their team until it is good enough. And then once they take it back to their team, they ask for a bunch of changes. How do you deal with Agile in your company being good, but your customer's not-

JG (00:16:36):

<laugh> Oh God.

EW (00:16:38):

Being particularly Agile? And so you are giving them all of the stories and asking for the feedback, and they are like, "It looks good. It looks good. It looks good. Oh, now I have shown it to my boss and he hates it."

JG (00:16:50):

Luca, you can speak to this as well. That is more of an interpersonal thing. That is an example of- Agile does not save you. That is just a basic customer interaction thing.

CW (00:17:01):

Yeah.

EW (00:17:02):

True.

JG (00:17:03):

There is an episode we are going to record, that I have had a conversation with Luca about- One of your clients, which is a hardware development company. We actually wanted to record with them, because they have hardware expertise and they have very rapid hardware iterations. They came to Luca for help, saying, "But our software is- We are very slow on our software iterations." We are like, "What?"

CW (00:17:32):

That is backwards! It is backwards.

LI (00:17:34):

Yeah. It is so funny. They were saying, "Hardware, is so easy for us."

CW (00:17:37):

Hardware is the hard part.

(00:17:37):

Yeah. "It is so easy for us to iterate in hardware every two weeks, but it is so difficult for us to iterate in software quickly enough." And it is like, "What!?!"

EW (00:17:47):

<laugh> Never had that problem.

JG (00:17:49):

<laugh> I know. They have been in existence ten years, and at this point they have now built up the reputation where they just specify how they work, and do not take contracts with people who cannot work that way. I do not have a good answer for that either, Chris. If we go back to what Elecia pointed out that we had said before, you couch it as risk management, which is essentially what it is. Like, "Do you not want to manage this risk as much as possible? Are we sure we are building the right thing? Do you not want to show it to your boss a little bit earlier?"

(00:18:30):

You are right that if people do not know what they do not know- If you have the Steve Jobs, Apple vision, like, "I know what the customer wants before they do," then maybe you are satisfying Steve Jobs instead of the end user. But either way, whoever makes the ultimate decision, show it to them as soon as possible. But it does have to have a certain level of polish, maybe. Or maybe you can come up with prototypes, before you have built any product, that just give some idea of what it is going to be. And get feedback on that.

LI (00:19:08):

The point is not what exactly you are building. The point is to gather feedback, and end up building the right thing. Just like Jeff said, this is really not about Agile being wrong. You can run into the same kind of situation in a Waterfall process. In fact, you will, because you are setting yourself up to only show it to them after you have invested all of this effort into building it.

(00:19:34):

You are going to get it wrong anyway, let us be honest. Not because you are incompetent, but just because nobody knows the future that well, and no customer knows what they want that well. So in the end- By the way, we keep making that same mistake that I also always make of speaking of the customer in the singular.

EW (00:19:55):

Yeah.

CW (00:19:55):

Yeah.

LI (00:19:55):

For anything but the most trivial products, you will have a bunch of people who very reasonably have demands on that product.

JG (00:20:03):

Shall we call them "stakeholders"?

EW (00:20:04):

<laugh>

LI (00:20:06):

If you insist. I do not like the term, but I do not have a better one.

JG (00:20:11):

<laugh> So I would actually turn the question back to the two of you, because both of you have a lot of experience developing products for customers, and in teams of different sizes and purposes. How have you tended to manage those interactions? Or do you stumble through them, and hope they do not happen too often?

CW (00:20:28):

Do you want to go?

EW (00:20:30):

Backup plans. Backup plan A through F, sometimes G.

LI (00:20:36):

Oh, risk management. That is interesting.

EW (00:20:39):

Well, but risk management in a different way. Say I am presenting a tool to go on a drone, which I have done, and the tool may work in multiple different ways. When we get to the point where there is an official milestone, where we are meeting, where we are showing things off, I have already thought about what are the top three things they are going to ask about, and how can I mitigate those risks?

(00:21:14):

So it is actually the opposite of Agile. I am not asking them the questions. I am trying to anticipate what they want, so that when they say, "Oh, but we wanted the drone to fly loop-the-loops," I can say, "Oh, yeah, I have a loop-the-loop process." That means that I have wasted some time, because I may have implemented things they do not need yet. But the idea is to try to anticipate what they need before they have told me. Which is always dicey. What about you, Chris?

CW (00:21:47):

You are so much better at that than me <laugh>. My answer is anger and frustration.

EW (00:21:51):

<laugh>

CW (00:21:53):

Yeah, I am trying to think of examples. There have been a couple of- I have worked at several medical device companies, with various levels of rigor. The last time I did a medical device, it was a very founder-oriented company. The founder was very in charge of things, but he really did not know what he wanted. So we spent years in the iterative cycle: delivering prototypes, him testing them in the lab, delivering prototypes, him testing in the lab.

(00:22:24):

That just became- I was worn out to the point where anticipating what he wanted was impossible. It was just, "Okay, he is not going to like it next week for some reason, so we will just keep writing the code and changing the hardware." That took years and years and it was very, very difficult. So in situations like that, I do not think there is a good- There is not much you can do.

EW (00:22:44):

Agile, Waterfall, special magic. Nothing was going to work.

CW (00:22:48):

That was acute founderitis. And there was no help for that. But at other places, I think I have tried to do what you do and anticipate what people need, especially when I am more in charge of this software design. But in a lot of teams I was not, and I was just a member of the team who owned a particular portion of the code. Even if I was senior, it was like, "Well, you are doing the DSP stuff, or you are doing the UI stuff."

(00:23:19):

That is more limited. You do not have as much opportunity for, "Oh, the customer might do this, so we have this other version." We could do it in small ways, but it is always frustrating for me. I think I get into a situation, and this is just a personal problem, where I second-guess the customer. In most cases, the customer I am referring to here is somebody very senior at the company.

JG (00:23:45):

Sure.

CW (00:23:45):

Because I was usually very senior in a software team, or running the firmware software team. So I was reporting to the CTO or the CEO. "Here is the software. Here is the firmware." And we would go back and forth. The times it has been most frustrating for me, is when they do not know what they want.

EW (00:24:05):

<laugh>

CW (00:24:05):

Even though they are very opinionated about what they want. I would deliver something that my team did that I thought was fantastic, and they would be like, "Oh. Yeah. No." <laugh> Then trying to explain to them why they were wrong, because I thought they were wrong, and I thought I knew what they wanted. Anyway, to sum up <laugh>, I have not had great success with managing customers <laugh>. And at places like Fitbit, I was so insulated from actual customers, that that never became a thing, really.

JG (00:24:46):

Right. Hmm. I am looking back over the interactions I have had. Since I have gone solo, I basically see those customers as having red flags and do not work for them.

CW (00:25:01):

<laugh> Yes, that is the correct answer, but it is too late when I am working full-time with a big company <laugh>.

JG (00:25:06):

Yeah. That is why I went solo: I have complete control over who I work with. I do a lot of pre-qualification and I can just say, "No, I do not know that this is going to work out so well," and not take that particular project.

LI (00:25:20):

Yeah, but we are touching on something really challenging here, which is the entire subject of culture, is it not? If you are working inside a company, and you have got bosses like this, who do not really want to listen to you, but at the same time they want to tell you what to do, quite clearly that is not going to work.

CW (00:25:45):

Yeah.

LI (00:25:45):

That is the end of the story. There is just no magic bullet. You cannot have a standup, and that makes it right or something. This is something you as a collective organization, you need to work on this, if this is something you care about. Or you can just go on floundering about, which is what tends to happen because culture change is hard.

(00:26:10):

And it is very personal. It makes people very nervous. It makes people very defensive. Because maybe if one of those executives were to take a look at themselves and say, "Well, I have not been very helpful, have I?" That stings. There is no way around this.

(00:26:32):

Long story short, this is the holy grail. Call it Agile, call it DevOps, call it what you will. But having an organization where you can have open discussions, and where those discussions actually have a meaningful positive effect, that is really difficult to achieve. The same goes for customer interactions. You can have awesome customers and you can have quite terrible customers. And just like Jeff, one of the things I love about being a solo consultant, is that I can have a no-[bleep]s policy. I just do not work with [bleep]s. End of story.

JG (00:27:11):

Amen.

CW (00:27:12):

Yeah. And I should note that Elecia and I are both consultants. <laugh>

EW (00:27:15):

Yeah.

LI (00:27:15):

Of course.

CW (00:27:18):

For similar reasons <laugh>.

EW (00:27:20):

To some extent, I worry that this is sadly not as applicable to most folks who are in companies, because they do not get these options. I have been there. I used to rail against design by committee, which sometimes Agile encourages. I want someone to have a great idea and to stick to their guns. The Steve Jobs method. I know that that is not always possible. It is not always good either; if the person does not have a good idea, it is just not good. But there is a lot of downside to design by committee, especially the very long arguments from the two people who are never going to agree. And I still have to sit here and listen to them.

CW (00:28:07):

I think the position that sums up the situation we are talking about, the one I find most difficult, is all the responsibility with none of the authority.

EW (00:28:16):

Of course. That is why you are going consulting <laugh>.

CW (00:28:19):

You were only allowed to get things wrong, and you were never given credit for doing anything right. That is a very difficult position. Like you said, Luca, it is a cultural problem. I think a lot of the problem I have with software methodologies and development methodologies writ large, is a lot of times people do try to sell them as, "This will fix your company. Do this as a company." That is where I am like, "You are not going to fix-"

LI (00:28:44):

Yeah. Completely agreed. Just like you say, none of this will fix your company. The hard work is in human interactions. It is so fascinating. I frequently talk to engineering managers, engineering leaders of whatever description, and they all tell me, "You know what, I used to start out thinking that engineering was about, I do not know, differential equations, or C++, what have you. And as it turns out, it is really about engineers having conversations, and the differential equations and the C++ code are just side effects of those conversations. If you do not have the right kind of conversations, you will not have the right kinds of side effects. That is it."

JG (00:29:28):

Yeah. In these cases, there is no magic bullet. There is no advice we can give, except recognize this principle of get to value and get feedback, apply that within your own sphere of responsibility and try to slowly, gradually expand that outward. And that is really all you can do.

(00:29:56):

I bet, Chris, in the story you told about the customer, when you were on the team with founderitis, and every other week you were presenting something new to the founder that they would inevitably not like- I bet you got really fast at- You architected your stuff, so that you were not spending six months building something that they would inevitably reject.

CW (00:30:15):

Right.

JG (00:30:15):

I bet you sped it up and minimized it, so that you were getting to that moment of feedback- inevitable negative feedback, as it turned out. But I bet you were getting to that point as fast as possible.

CW (00:30:26):

Yeah. No, the development cycle was very quick.

JG (00:30:29):

Very rapid. Yeah.

LI (00:30:30):

Congratulations. Very Agile of you. <laugh>

CW (00:30:33):

But to do that, an important point, to do that we existed in prototype. This was a medical device, which makes this a different situation. We existed in prototype mode, and once we shifted toward, "Okay, now we are going to start making the product," I threw all that out and started over.

JG (00:30:48):

Yes.

CW (00:30:49):

That was an important step, because all of that stuff was developed with a level of care and quality that I would not want to exist in the world. That happens too, where you are iterating so fast and you do not realize, "This is a prototype. This is a prototype. Do not ship this." Which matters differently for different products.

LI (00:31:08):

Well this is contrary at least to the Agile scripture, which says that, "You should always create production quality artifacts."

CW (00:31:18):

Yeah. But I cannot do that with a medical device. The FDA would come in with their guns and tell me to take a walk.

LI (00:31:24):

Ah, you can still do it.

CW (00:31:26):

I could not do it in that situation, given the level of change and requirements change, I could not document that. There was no way to do any traceability or document that, in that situation. I am not saying you cannot do it with a carefully set up system, but at that point, I did not even have a document control system. That was not going to happen.

EW (00:31:45):

I want to go back to Luca's, you should be writing production value code as part of the Agile. Is that part of the manifesto?

LI (00:31:54):

I think it is, yes.

EW (00:31:55):

I have read the manifesto. Part of me says "Yes," because you want clean, good code. Part of me says "No," because I do make a lot of simulators and prototypes that are just to show off one particular piece. Looks-like mockups or works-like mockups, neither of which can be shipped. But if I can just make them go together, somebody will buy that product. I am surprised.

LI (00:32:24):

Yeah, but that is not the product. That is, as you said, it is a mock. That is a valuable thing, do not get me wrong. It is just not the actual product, is it?

EW (00:32:35):

No. But it is not production code either. It is something I am doing for faster feedback. So I feel like I am breaking one rule to-

LI (00:32:44):

If you are building a clay model for wind tunnel testing, nobody will think that this is going to be whittled down into the actual car that you end up selling. Of course not. It has a clear purpose, it has clear value. So within the confines of clay models for wind tunnel testing, it should be a good model.

CW (00:33:06):

Ah.

LI (00:33:06):

But it is not the car, is it?

CW (00:33:09):

So, okay.

EW (00:33:11):

Moving that to software, there is a place for not writing production code. There is a place for making mockups and models, simulations and quick and dirty tests.

LI (00:33:24):

Oh yes, of course.

CW (00:33:25):

Is it a statement of quality, Luca?

EW (00:33:27):

The production part?

CW (00:33:28):

Yeah.

JG (00:33:29):

Yeah. I also got hung up on that same statement. So explain yourself, sir.

EW (00:33:34):

<laugh>

LI (00:33:37):

<laugh> Yeah. Well, I do not have to explain myself, because they are not my words, but the Agile Manifesto's <laugh>. Nonetheless, the point is, if you are working on your product, then it should be, as the Agilists say, potentially shippable. You should in principle be able to ship this; whether you end up daring to do so or not, or whether the FDA forbids you, is beside the point. The point is, you do not build sloppy product.

CW (00:34:08):

Yeah.

LI (00:34:09):

You are welcome to build sloppy prototypes, and then throw them away.

CW (00:34:13):

Right.

LI (00:34:15):

But you are not allowed to build sloppy product.

EW (00:34:18):

I feel like this is one of those pieces where Agile is for the software world, and not for the embedded world.

CW (00:34:24):

How so?

LI (00:34:25):

I love that argument, because everybody comes up with it. I have not yet been convinced that there has been a true instance of this. If you are building an actual-

CW (00:34:35):

Fight. Fight. Fight.

EW (00:34:35):

<laugh>

LI (00:34:36):

Yes. If you are building actual product, not prototype- A prototype, what is the value of a prototype? The value of a prototype is in answering a specific question. Within those confines, it should be well crafted. It should give good, clear answers to the question you were asking. And then you throw it away, because clearly it is not your product, and you will not ever ship it, or anything like that. Or implant it into patients.

(00:35:06):

So even if we are building hardware, or things that smell a bit like hardware, like simulations for instance- This is why mechanical engineers or electronics engineers do simulations, because actual physical prototypes are just too complicated and too expensive, compared to software. Software prototypes are cheap and instantaneous, or free and instantaneous, in fact. Whereas if you are a civil engineer and you want to build a bridge, you do not prototype. You would love to have an actual increment of your real bridge standing in your real valley, but that is awkward and expensive. So you make do with some simplified version of that.

(00:35:48):

Software has the awesome property of enabling you to build real life bridges and having real life trucks drive over it, metaphorically, ten times a day. But you should be as close to that as you can. You should still have a very clear distinction of what is my actual product, that I am working on and that I will eventually offer to my customer. And what is just a prototype that has the value of answering a specific question that I have, a technical question that I have. Should I build it this way? Can I build it this way? That is fine. You answered the question. You throw it away, and then you work on your product and you do it properly. Whatever that means in your specific context.

EW (00:36:34):

Is your prototype in this case a branch of your firmware? Or is it a separate repo? I know that that is a technical detail.

LI (00:36:44):

I do not care, as long as you throw it away afterwards.

EW (00:36:47):

Yeah. See, I have plenty of times where my prototype becomes part of my product, and I do not throw it away. I throw away the six other pieces of code I wrote.

CW (00:37:01):

The surrounding driving code.

EW (00:37:03):

The cruft that I put in, because I was not sure which direction we were going. As I move to more production, it is a matter of getting rid of that code. I do not write production quality, because I am writing extra stuff I do not need. That I do not want to be in the production, because it is a maintenance nightmare if we are not using those features.

CW (00:37:22):

Well, yeah, no. That is an interesting point, because with embedded software, there are often a lot of debug tools, right, that are not part of the product.

EW (00:37:31):

Debug printfs. That serial console. Yeah. Is that production code or not?

CW (00:37:37):

Assertions that you take out. Things like that. I do not know how to answer that question.

LI (00:37:43):

Oh, that is a fun question, because as far as I am concerned, logging is an aspect of production.

EW (00:37:51):

Not if your code is-

LI (00:37:53):

If it is printfs during development and during debugging, and then you will throw them out later, then no, then it is not part of your product and do whatever you want. But if it is logging, then it is a first class component of your product. Just like documentation, for instance. Just like anything else.

EW (00:38:13):

<sigh>

LI (00:38:13):

I know I am really strict about this.

CW (00:38:17):

<laugh>

EW (00:38:18):

But what if- Okay, so I have worked on children's toys. Educational toys for Leapfrog, a company which sadly no longer exists. One of the things that we had was debug printfs, like everybody, and we had logs. We could turn them off in the final product, but we did not usually, because there were times where if something went systematically wrong, we could get to those logs.

(00:38:48):

But if I did the same thing in a medical device, and those logs included private information, that is now a security risk. Although I need it for the certification process, I want to be able to turn those off. I want to be able to clear those, because they are a security risk. So is that production code or is that prototype code? It is something I need to leave in the final code, but I need to make sure they do not-

LI (00:39:21):

That is production code, in the sense that you need it to get to a releasable product, is it not?

CW (00:39:29):

You are releasing it, therefore- I mean, it is in the system, so yeah.

EW (00:39:33):

Okay. But it is not creating value. It is-

LI (00:39:39):

It is creating value, because it makes the FDA happy. And if the FDA is not happy, then your customers will never get your device. That will make them unhappy.

JG (00:39:47):

I feel like the distinction here maybe is not useful. I think everyone knows an Arduino prototype is a prototype.

EW (00:39:55):

<laugh> Yeah.

JG (00:39:55):

And then you throw that code base away, and you write it on top of a real RTOS or something else. But to me, I would say, whatever code you are writing, write it with quality that is commensurate with how it is going to be used.

(00:40:11):

We all know the anti-pattern of someone throwing together something slapdash. And then it works, and then that stays in the final product. It is technical debt. Technical debt can be okay, but if it was thrown together in the beginning, without regard for error edge cases, or what if this particular thing happens, and that is going to go wrong, and you are not handling it. That cannot make it into production, at least in a safety-critical application.

(00:40:41):

It sounds like this code that you were writing, that stored logs, and you used that during maybe the V&V process, and then cleared them as the device went out the door. Like, the last step is, "Yep. Those logs look good." Now the step in manufacturing is to clear them and disable that functionality, and then the device goes out the door.

(00:41:00):

I do not know that it is really useful to debate whether that is proto- To me that is production code. Because it does not interact with the patient, maybe you do not have a line in your traceability matrix where that could cause patient harm, because that is mitigated by the fact that you are turning it off before it goes out the door. So I do not know that it is really useful to get hung up on that debate. Does that make sense?

EW (00:41:28):

It was an example of other code that- I cannot tell the difference between prototype and production all the time.

CW (00:41:36):

I do not think any of us are disagreeing that you should write good code.

EW (00:41:39):

Oh, no.

CW (00:41:41):

And that when you write code at a company for a product, whether it is prototype or not, it probably should follow whatever software development process you use, and be reviewed, and have a design maybe, and some documentation and stuff like that.

(00:41:55):

I think where I got hung up with Agile, and maybe this is a misinterpretation, is, this is the Sprint model, which is maybe not Agile, but it is conflated with it, is, "Oh, you should have a minimum viable product shipping every sprint." And with embedded, it is like, "Fantastic. You have a SPI driver for a light, and a CLI where you can type hello."

EW (00:42:15):

<laugh>

(00:42:17):

After two weeks, that is what you get <laugh>. So that always struck me as weird, because there is nothing minimum viable you are going to have until six months from now.

LI (00:42:27):

But that is...

CW (00:42:30):

I think that is my misinterpretation of that.

LI (00:42:32):

That is what it feels like. I am sorry, you wanted to say something, Jeff?

JG (00:42:38):

I was going to say that, yes, there is going to be a certain amount of time before you get something that is minimally viable.

LI (00:42:48):

Exactly.

JG (00:42:48):

That is just the example of you cannot fix scope and time and what is the other leg? I cannot remember. Content. Well, no.

EW (00:42:57):

Cost?

JG (00:42:57):

Scope, time and team size maybe. If you fix all three legs of that triad, you are going to fail. So the classic Agile way is get to shippable, define that minimum scope, get to that minimum scope as soon as possible, and then improve from there until you are out of time or budget. A lot of teams, that is explicitly how they work, and that is fine. Once you get to that minimum scope, then try not to do a lot of work that does not get folded into something shippable.

CW (00:43:30):

Right.

JG (00:43:30):

That is by definition your cycle time that you want to shorten. Do a little bit more and then incorporate into that shippable product. The little bit, you want to get that to be as little as possible that you are doing without it actually being shippable. Does that make sense?

CW (00:43:45):

No, that makes a lot of sense.

LI (00:43:46):

Yeah. I want to also speak about MVPs for a little bit, just because it is such an overused word. People are often not as clear on the definition, as would be helpful for the conversation, I think. So, what is the point of an MVP? The point of- The thing that you create after, let us call it a two-week iteration, is not an MVP. An MVP is sort of a particular milestone, if you want to, which is the minimal set of functionality that you feel confident to show to your customer, in order to get feedback. It is not even- It is far from complete. It would be terrible if it were complete.

CW (00:44:29):

<laugh>

LI (00:44:29):

It must be as incomplete as you can make it, so that you can optimize for fast feedback. So in that sense, it is similar to a prototype, but I like to say that a prototype answers a technical question, "Can we build it this way?" And an MVP answers a value question, "Should we build it this way? Does anyone care? Can we get anyone to buy it?"

(00:44:56):

So it can also, by the way, be very, very different from your final product. I once talked to a founder of a job market website, some typical two-sided market where people could, I think they could post jobs, and then people could apply for those jobs, that kind of thing. What they did was, they created this very simple website without any business logic. I think they had a contact form, and it dumped straight into a CSV file or something. Then the quote unquote "business logic" was him and his co-founders sitting down and manually sifting through those CSVs and saying, "Okay, this is a match. This is a match." And driving the supposed business logic.

(00:45:40):

The whole value of this was, that they did not actually get burdened by building the business logic. They could slap together this website, between the second and the third beer, and then see if they got anyone to care, could they even get traction in the market? And of course, once they discovered, yes, they got traction, then yes, they built the business logic.

(00:46:04):

But only then, only after they had answered the question, does anyone even care? That is the point of an MVP. Like the saying of the Lean Startup book, "If you are not embarrassed by your MVP, you waited too long." So make it as simple as you can, but do not confuse it with your final product, and do not confuse it with a prototype, because they answered different questions.

JG (00:46:31):

It is funny, we have argued about this. I am going to argue with you, and I have a feeling Elecia and Chris have the same concerns that I do over that statement. So, <sigh> when you are working on an embedded-

EW (00:46:42):

The resounding silence. <laugh>

JG (00:46:47):

<laugh> You are like, "Hmm, dead, crickets." So when you have an embedded product, and you have distribution networks, and something has to be wrapped in a shrink-wrap package hanging on the shelf in a store-

CW (00:47:01):

It is January, it is CES.

EW (00:47:02):

<laugh>

JG (00:47:03):

Yeah, exactly.

EW (00:47:05):

If it is not in the stores by October, you cannot ship it for Christmas <laugh>.

JG (00:47:09):

Cannot ship it for Christmas. That is different from a SaaS application where to the user it looks the same, and you are, I think they call it Fred Flintstoning in the background, where you are manually walking instead of the wheels actually being powered. But to the user, it looks like that is the experience they have. They input their data and they get their answer and they are happy with it. They do not care that you did it manually behind the scenes.

(00:47:36):

There is a fundamental difference with embedded products that are actually hanging on the shelf. To me, I would define MVP as the minimum set of functionality that you are okay with hanging on the shelf for the user to take off the shelf. And anything short of that- If you are showing it to users, getting feedback and going back into your development process, I personally would call that a prototype. I do not want to get into a semantic argument about it, but I would call that a prototype, and the MVP is what actually hangs on the shelf.

(00:48:10):

If you can de-risk your program by building updateability into that- It is a big step, because for a child's toy you do not want to say, "Well, now we have to make it an IoT device just to get software updates to it." You have got to make your best guess. That is adding a whole other level of risk that maybe was not appropriate.

CW (00:48:31):

Ah, the number of times <laugh>.

LI (00:48:33):

Yeah, of course your approach will depend on the actual product that you are building. Maybe you are building a physical product that at some point a truck will come and take off your hands, and then it is gone. So in order to iterate more quickly, maybe your first iteration will have a stronger processor, and you will do a lot of, I do not know, filtering and post-processing in software.

(00:48:57):

And then later on, you will do cost saving measures, and put some of the filtering in hardware and go with a cheaper processor, or what have you. So your specific approach will depend on your specific product, and the environment. Yes, of course. And some things that work in the SaaS space do not work for embedded. Many things, perhaps even. That is fine. But the point is the approach, not implementation details.

JG (00:49:30):

I really like that specific example, of the first few products that you ship out the door that actually are used by real customers, you do not care as much about BOM costs.

EW (00:49:39):

Right.

JG (00:49:41):

That is maybe an example.

LI (00:49:42):

Exactly.

JG (00:49:42):

You want something tangible that you can latch onto, like, "Yeah, I will not care about BOM costs for the first 200 units, because I want to see if they actually sell. If they fly off the shelves, then I can- Yes, I am losing money on those, but I can do then a BOM cost reduction effort, and then the next thousand that hit the shelves are cheaper. Beyond that, they start actually making a profit. But I do not want to spend all that extra time optimizing my BOM cost, if I do not even know if they are going to sell."

CW (00:50:11):

Yeah. And that works great for a lot of things. Medical devices, low volume, high ticket price things, and I have done that. For other stuff like Fitbits, well maybe. They did not start out as a big hit to start with.

EW (00:50:29):

We always had users.

CW (00:50:32):

Yeah.

EW (00:50:33):

Even before things got shipped, it was always the over-the-air update that is the feature that has to go into the production units, but does not have to go into the user test units. And yet that is such a huge feature.

CW (00:50:53):

Especially since in later days, most products do not ship with firmware that works.

EW (00:50:58):

Right. It is just the over-the-air update.

CW (00:51:00):

To buy the poor firmware people some more time. The hardware-

EW (00:51:05):

To buy- Six months between-

CW (00:51:06):

The hardware is in Target before we are done <laugh>.

JG (00:51:09):

Yes. The zero day updates. Yeah.

CW (00:51:11):

Yeah.

EW (00:51:11):

Yeah. Hate those.

JG (00:51:15):

Something you just said, the internal testing development modules do not have to have remote over-the-air updates, but the production modules do. And that is where if I were coming in and looking at that team, I would push them really hard to get that OTA functionality done as early as possible. And that is what you are using to update your internal testing units. Because then that process becomes more bulletproof, and you start to- It is all about risk management. You do not want the very first time you do an OTA update to be when you have got 10,000 units out in the field.

EW (00:51:53):

<laugh> I totally agree with you. But that is anti-Agile.

CW (00:51:57):

Is it?

LI (00:51:58):

Why?

EW (00:51:58):

Because over-the-air updates are not something you are getting feedback on from the user. It is not something that is part of the user stories. It is part of the engineering team's ease of use and updateability.

CW (00:52:13):

It is still an end user feature.

EW (00:52:15):

It is kind of, but-

CW (00:52:17):

It is not one they like.

EW (00:52:18):

But so many times I need to test the functionality of the device to figure out if it is market viable, and adding over-the-air update before figuring out that it is market viable is not good.

CW (00:52:31):

Oh, I see what you are saying.

LI (00:52:32):

Sure.

EW (00:52:32):

That is a lot of work to put upfront. Even though I totally agree with Jeff, that if I go into a place, that is one of the things to do, because it is important and you should do it, it will make your lives easier. But then you still have to have backend software, and that may take six months to update. Over-the-air update is just one of those features that I have trouble with in Agile.

LI (00:52:56):

No, I do not agree actually, because just like Jeff and just like the two of you, I would totally insist on having that sort of thing early on. Just like logging for instance. One of the first things you incorporate is a logging framework, so you can always tell what the device is doing, so you are closing a feedback loop.

(00:53:15):

Maybe do not make the mistake of saying, "User value is only created if something is actually visible to the user, and perhaps flashy or something." A working, I do not know, data backup is really important even if your user never sees it, and hopefully never needs to use it. But clearly this provides value. And the same goes for OTA.
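One way to make Luca's point concrete, as a Python sketch rather than anything from the show (all names invented): if logging is a first-class component, the sink should be pluggable, so the same log calls can go to a console during bring-up and to a persistent, clearable store in the shipped device.

```python
class Logger:
    """Minimal logger with a pluggable sink (hypothetical design)."""

    def __init__(self, sink, enabled=True):
        self.sink = sink          # any callable taking one string
        self.enabled = enabled

    def log(self, level, msg):
        if self.enabled:
            self.sink(f"[{level}] {msg}")

# Development: the sink is effectively a debug printf.
dev_log = Logger(print)

# Production: the sink appends to a persistent store that
# manufacturing can wipe, and logging can be disabled outright,
# which matters when the logs could hold private data.
store = []
prod_log = Logger(store.append)
prod_log.log("INFO", "boot complete")
prod_log.log("WARN", "battery low")
store.clear()             # clear before the device ships
prod_log.enabled = False  # or turn logging off entirely
```

The same shape works in C with a function pointer for the sink, so the debug path and the production path share one code base.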

CW (00:53:41):

Yeah. I think I can fix all of this for all of you. What if we made the software update have a really cool animation with fireworks and little narwhals swimming across, and so the user does like it.

EW (00:53:51):

<laugh>

LI (00:53:51):

Oh yes, please.

CW (00:53:51):

And so they can have feedback on the animation, and then it fits into the framework. I mean there are always exceptions to this kind of stuff.

JG (00:53:58):

Yeah, of course.

CW (00:53:58):

If you are so rigid about- This is why I get nervous about software methodologies. Because a lot of people who are very rigid about it, you talk to them and it is like, "I cannot do that. I have to make this exception here for this or this, but I can do most of this." And my thing for software methodologies when I talk to teams is like, "Please have one."

EW (00:54:18):

Yeah. It is like coding guidelines, "Please have one. I do not care what it is."

CW (00:54:22):

I do not care what it is. Please have one. Because having one is so much, millions of times better than not having one. Even if it is a terrible methodology <laugh>.

JG (00:54:30):

And then improve it over time. If something about it does not work, fix it.

LI (00:54:35):

Exactly. Be Agile in an Agile way. <laugh>

CW (00:54:39):

He is right. Adapt it to your organization. That is what I have done many times with stuff. When I have run small organizations, I ask, does this fit? Like, "Well, I am going to take a piece from here and here, and this seems to work for us." But I do not have a textbook out on the table saying, "Well, chapter four, we did not do this right, so we have got to start over." <laugh>

LI (00:54:56):

Yeah. I think this is so important, because people feel like they must get it right the first time. No, you are going to get it wrong in some aspects the first time around. That is one of the beauties of working in an Agile way. You are not condemned to being perfect, or trying to be perfect. You can figure stuff out as you go and find your own way, and that is going to be good for you. And what the best way for you is now, is going to be different from what it is going to be in five years. And that is perfectly okay.

EW (00:55:34):

I want to change subjects for a bit. As we were preparing to chat, unit testing came up, which I have always strongly associated with Agile development, but you think it has some pitfalls and some cons?

CW (00:55:54):

Oh, please tell me I do not have to do it.

EW (00:55:56):

Tell me. <laugh>

JG (00:55:57):

<laugh> Just a little background for the listeners. I think this was someone who maybe asked you a question? A listener of yours?

EW (00:56:03):

Yes. One of our Embedded listeners, Matty C, asked about what are the cons of unit testing, and CI and CD, and short feedback cycles, anything.

JG (00:56:14):

And I love the wording of Matty C's question, "Most Agile TDD for embedded discussions I have read, heard or have been a part of, focus on unit testing, good. CI/CD pipeline tools, also good. And also short feedback cycles, very good." Basically yes, all of those things are wonderful and do them, but, "What are the cons? Anything that shiny is hiding something under the surface."

EW (00:56:37):

Yes <laugh>.

JG (00:56:38):

I love that phrasing. So focus on each one of those things. I will get to unit testing in first, because I think that is the most subtle one. CI/CD pipelines, I would say are an unadulterated good. Personally, I think the only downside to those is that yes, you have to invest a little bit of time if you have never done it before, to learn how to set it up. Suck it up and do it. I think you should always have a CI pipeline. I would maybe call it a "build pipeline," because as we have talked with Jonathan Hall recently on our podcast, CI and CD have definitions that you may or may not be following.

CW (00:57:20):

Oh, right. Yeah.

JG (00:57:22):

Continuous integration is that practice of merging back into the trunk as quickly as possible. Trunk based development is now a more modern way to describe that, whatever. But the automatic build pipeline that runs automatically as many steps as you have created for it, at a minimum it is building all the artifacts, hopefully also running automated tests. That is an unadulterated good. So do that.

(00:57:47):

Short feedback cycles is just a global principle that yes, I say is an unadulterated good. How you implement that at your organization is what we have been arguing about for the past 45 minutes.

EW (00:58:00):

<laugh>

JG (00:58:00):

Unit testing, I would say it is good, but it is very hard to do well. You can very easily unit test your way into a corner, where you have now this brittle suite of unit tests that is slowing you down.

CW (00:58:16):

Oh, yes <sigh>.

JG (00:58:19):

I want to be very careful here, where I am not going to say, "Well, if it slows you down, then you are just doing it wrong," like that no true Scotsman garbage. It is hard to do well. There are parts of software I have written that I do not bother unit testing. Luca is a big advocate for this. He is like, "If you do not-" I would say basically, things that are more risky, where you can mitigate that risk with unit tests, please do so.

(00:58:48):

I think that test-driven development is a skill that is very hard to do well, but provides a lot of value once you get good at it. But there is a learning curve there. On the early part of that learning curve, you can actually decrease your development speed. It is just difficult to get right. But it does have a lot of great benefits. It enforces an architecture that is easy to test.

(00:59:11):

I would say it is bad during the prototype stage. When you do not know what you are building, and you are very rapidly just trying to answer those prototype questions, do not bother with a lot of unit tests. Because that is not the problem you are trying to solve right now.

(00:59:27):

Unit tests are a way to make sure that your module is correct and handles a lot of corner cases well. Those are the kind of modules for which it works very well. If you are just trying to throw something together to answer a question of whether or not someone will buy this, I personally would advocate against doing a lot of unit testing, and certainly not doing TDD.

LI (00:59:51):

Yeah. Similarly also, if you have got an old code base, which is, as old code bases tend to be, not very well covered in tests, then as you start unit testing and maybe even TDD, some people think that they must now cover everything in tests. I say do not, because what is the point? What are we trying to do with testing? We are trying to increase our trust in the product. We are trying to prove to ourselves that there are no issues with it. Now with your old code base, you already know. It has been running out in production for ten years, for better or worse, you know what to expect. What is the point of testing something when you already know the answer? So do not bother.

EW (01:00:38):

But when you are going to go change something in an old code base, maybe add unit testing then.

LI (01:00:43):

Then you should cover in tests. Yeah.

JG (01:00:44):

Would be a very good point.
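The move Luca and Elecia settle on, only covering old code in tests when you are about to change it, is often called characterization testing: record what the code currently does, lock that in, then refactor. A minimal Python sketch (the legacy function and its recorded outputs are invented for illustration):

```python
def legacy_scale(raw):
    # Ten-year-old production code: convert a raw ADC count to
    # tenths of a degree. Nobody remembers why the offset is 37.
    return (raw * 5) // 16 + 37

def characterize(fn, recorded):
    """Assert fn still produces the outputs recorded from the current build."""
    for value, observed in recorded.items():
        assert fn(value) == observed, (value, fn(value), observed)

# Inputs picked from captured traffic; the outputs are whatever today's
# code produced for them, observed by running it, not read from a spec.
characterize(legacy_scale, {0: 37, 16: 42, 1023: 356})
```

The recorded values come from running the existing build, not from a requirement, which is exactly the "you already know what to expect" trust Luca describes; the tests only exist to catch the change you are about to make.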

CW (01:00:46):

I would say, from my experience doing tests, and how test suites tend to, I am going to use my favorite word, accrete over time. At Fitbit and Cisco- At Cisco at one point, the example I love to give is, there was a static testing suite that ran anytime you tried to make a commit, and it would take 24 to 48 hours.

EW (01:01:07):

By which time-

CW (01:01:07):

By which time the trunk had moved on, and you were doomed and had to start over.

LI (01:01:13):

Yeah, been there.

CW (01:01:15):

My feeling toward unit tests is you need to rate the code that you are considering writing a test for on a few factors. How often is this likely to change, and how critical is it to your core functionality? And in general, how much risk is this? And I think you have mentioned that.

LI (01:01:34):

Yeah, we are going back to trust, are we not?

CW (01:01:36):

But I am not going to write a unit test for my SPI driver. I am just not going to do it. I am going to get it to work, and then that thing is going to stay there for a thousand years and no one is going to touch it. Historically, looking at the kinds of code that get touched a lot over the course of product development, there are certain things that once they are written, nobody goes in there anymore. Once they are written and working and tested. They need to be tested, but they do not necessarily need to have a unit test that runs with the suite.

EW (01:02:03):

Not unless you can put a logic analyzer in there too.

CW (01:02:06):

The higher level you go up in the stack, is when you start to get things that have a lot of churn, a lot of people in there. That is where-

EW (01:02:12):

Algorithms.

CW (01:02:12):

Algorithms, customer facing UI stuff, things where there are timing and race conditions, and inter-module communication. That is where unit testing really has a lot of value, because people are going to break that stuff constantly. It is kind of a paradox like, "Oh, how do I know what people are going to break constantly, unless I am testing everything?" But that is where some experience and understanding of the architecture, and some history with the kinds of modules that people write, is useful I think.

LI (01:02:40):

Yeah. I think this is one of the points where you can really trust your instincts as an engineer. What aspects of your code do you feel uneasy about? Those should be covered in tests. The things you do not worry about, whatever. Test them, do not test them, does not really move the needle. It tends to be that engineers have a fairly good understanding of where the critical aspects of their code are. If you ask an engineer, "What scares you the most about your product at the moment?" They will be able to point to it. And this is what you might consider covering in tests.

CW (01:03:17):

Yeah, I would write a test for every piece of code where you have a comment that says, "I do not know why this works, but do not change this."

EW (01:03:23):

<laugh>

LI (01:03:24):

<laugh> Yes.

JG (01:03:26):

Oh, dear. There is a phrase, I cannot remember who said it, "Write tests, not too many, mostly integration tests."

CW (01:03:32):

<laugh>

EW (01:03:32):

That is right.

JG (01:03:36):

I cannot remember who said it, so apologies to the luminary out there who came up with that aphorism. I am looking at this device on my bench right now, and Luca and I have talked about this in the past. I have a suite of integration tests, that basically use the CLI over a serial, and tell this thing to do something and read some data back. That is exercising a lot of functionality. It is nice because I now have this automated suite of these things that I run, and it takes probably five minutes, because it actually does a lot of different things.

(01:04:15):

I can run that every evening or whatever. I do not run it on every commit. But when I am done for the day and I am packing up, I run it and I immediately know. Or whenever I feel not confident about something, I worry if this broke something, I run it and I go get a cup of coffee and I come back and it is done. And it makes me feel better. Not everything in this device is unit tested, but anything that I did not feel comfortable with, especially error injection. Like, if a module does not see an error come along very often, but when it does, it needs to do the right thing. Even if that is a SPI driver. How often do you get SPI frame errors?

CW (01:04:58):

Once a career.

JG (01:04:59):

Not very often, yeah. But if there is something where like a SPI driver has to handle some situation correctly, I might put a unit test on that.

CW (01:05:12):

Yes. Safety critical is another aspect.

JG (01:05:14):

Exactly. Then that might drive the architecture of my SPI driver to separate concerns. I am not going to test that it writes the correct values to registers. Some people do that. I do not. I would put that behind a very low-level shim abstraction, and then unit test the layer above that, to make sure it does the right thing, given what it is seeing from the registers.
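A sketch of the layering Jeff describes, in Python for brevity and with every name invented: the register-poking shim is too thin to be worth unit testing, while the layer above it, including the once-a-career frame-error path, is exercised with a fake shim that injects the error.

```python
class FrameError(Exception):
    """Hypothetical error the low-level transfer can report."""

class SpiShim:
    """Very thin layer that actually pokes registers; kept so small
    it is not unit tested on its own."""
    def transfer(self, tx: bytes) -> bytes:
        raise NotImplementedError  # real implementation touches hardware

class FakeShim(SpiShim):
    """Test double: canned responses, plus an injected frame error."""
    def __init__(self, responses, fail_on_call=None):
        self.responses = list(responses)
        self.fail_on_call = fail_on_call
        self.calls = 0
    def transfer(self, tx: bytes) -> bytes:
        self.calls += 1
        if self.calls == self.fail_on_call:
            raise FrameError()
        return self.responses.pop(0)

class SensorDriver:
    """The layer above the shim: this is what gets unit tested,
    including the error-handling path that hardware rarely triggers."""
    def __init__(self, shim: SpiShim, retries: int = 1):
        self.shim = shim
        self.retries = retries
    def read_sample(self) -> int:
        for attempt in range(self.retries + 1):
            try:
                return int.from_bytes(self.shim.transfer(b"\x01"), "big")
            except FrameError:
                if attempt == self.retries:
                    raise

# Error injection: the first transfer sees a frame error, the retry succeeds.
driver = SensorDriver(FakeShim([b"\x02\x00"], fail_on_call=1))
assert driver.read_sample() == 512
```

In C the shim would typically be a struct of function pointers, swapped for a fake in the test build, which is the same separation of concerns.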

EW (01:05:39):

Okay. Moving on. Another listener question. This one from Doug G, who clearly is trying to poke the bear. "Requirements, specifications and documents. Can Agile help?"

CW (01:05:51):

Oh, no.

JG (01:05:53):

I am going to go get a snack.

EW (01:05:55):

<laugh>

LI (01:05:55):

<laugh>

JG (01:06:00):

What I advocate with documentation is, as much as makes sense. Make your documentation automatically generatable. So, for medical devices, we have talked about this for requirements- The Agile way to develop requirements is to have some collaborative repository. So you do not just have a Word document stored in Dropbox, which I have done a hundred times, where you are yelling at people across the office, "You just overwrote my changes." "No-" And [bleep] was flying all over the place.

(01:06:32):

The Agile way to do that is to write it in an environment that is collaborative, because you know you are going to have to collaborate on them. If you need to then take that from that collaborative environment, maybe it is a wiki or some kind of requirements tool database. Then if you are copying and pasting that into Word documents, that is error prone. You are going to have to do it a bunch of times, so you might as well take the time to write a script to automatically generate the document, in the right format that the FDA is going to expect.
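[Editor's note: a minimal sketch of that generation script in Python. The export format, field names, and output layout are all hypothetical; a real script would read from whatever the requirements tool actually exports and render into the template the regulator expects.]

```python
import json

# Hypothetical export from a requirements tool or wiki; the schema here
# (id, text, verification) is an illustrative assumption.
export = json.loads("""
[
  {"id": "REQ-001",
   "text": "The pump shall stop within 50 ms of a detected fault.",
   "verification": "Unit test"},
  {"id": "REQ-002",
   "text": "The UI shall display the remaining battery level.",
   "verification": "Inspection"}
]
""")


def render_spec(requirements):
    """Render exported requirements as a plain-text document section."""
    lines = ["Software Requirements Specification", ""]
    for req in requirements:
        lines.append(f"{req['id']}: {req['text']}")
        lines.append(f"  Verification: {req['verification']}")
        lines.append("")
    return "\n".join(lines)


doc = render_spec(export)
print(doc)
```

Because the document is regenerated from the single collaborative source every time, the copy-and-paste step, and its errors, disappear, and rerunning it a hundred times costs nothing.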

(01:07:09):

That is how you apply- That is an example of where people- The Waterfall methodology assumes you only do that process once, and nothing in real life works that way. After you have done it the second or third time, and you are early in the project, and you realize you are going to have to do it a hundred times, take the time to automate that and then it is just taken care of. Now that feedback cycle is very quick every time, and you are always in a shippable state. Does that help for requirements and documents? What do you think?

EW (01:07:39):

I like the idea you are always in a shippable state. Agile can help with documentation and specifications and requirements, if you look at it from the perspective of risk management and fast feedback loops. In some cases where you cannot make a prototype to show people, when you are talking about satellites or big equipment-

LI (01:08:05):

Why cannot you make a prototype satellite?

EW (01:08:08):

Oh, let me finish. You cannot make one initially to show people, so you give them the requirements, you give them the specifications, and they can okay those. Those are the documentation form of minimum viable product.

(01:08:25):

You cannot necessarily make a satellite on earth that functions as it would in space. I mean the James Webb Space Telescope, I could tell you probably was not developed without requirements and specifications. Those still can be developed with Agile, because it is about communication. It is about showing people what you are going to build, and getting them to understand early in the process if that is what they want. Do you agree Luca? It sounded like you were not really sure about the whole satellite thing.

LI (01:09:01):

No, I am perfectly happy with what you are saying. This is a trope that I get quite annoyed with, really: people saying, "Haha, we are Agile. We do not really need requirements, or documentation or anything of that nature." Of course you do. You just accept that they will change over time as you learn more. That is all.

(01:09:22):

But just like you said, if some aspects of the James Webb Telescope cannot be tested on earth, or need to be figured out first, you need to write them down. You need to make them understandable, shareable, communicable. Is that a word?

EW (01:09:44):

Yep. Although usually it is used with diseases, so maybe not.

LI (01:09:47):

Okay, fine <laugh>. But I guess you know what I mean. And that is really the point. Is it useful to somebody? Are you transmitting valuable information to somebody? That is the point. If you pull that off, then you are doing it right, by definition.

EW (01:10:04):

I do really like to think about specifications and requirements as early Agile. It is Agile before you do the code. And they are never fixed. They are part of the conversation of, "Is this what you want me to build?" And of course there are times when that does not make sense. You do not give your customer a requirements document, if they are a consumer <laugh>.

(01:10:29):

But if you are working with space equipment or aviation or medical, yeah. You are doing the first step of, "Is this what you want me to build?" And it is not fixed in stone, until you give it to the FDA, in which case it is.

CW (01:10:46):

Yeah. I have always had trouble with- There have been people I have talked to who said, "Well, why bother doing requirements? They change." Or, "Why bother doing a design? It is just going to change." It is like-

EW (01:10:55):

So you think about it!

CW (01:10:57):

Why bother writing code? Why bother doing anything <laugh>? It is an illogical position, but my gut feeling is some people are like, "I do not like writing documents. I do not like thinking about this. It is much more fun to write code. So if I can come up with a justification for not having to do this, that sounds really good. Well, the customer is always changing the requirements, so it is not worth writing those down." I think that is crazy.

JG (01:11:24):

Yeah. Because I focus on medical devices, and I have for so long, requirements are just part of the deal. But in consumer products, I would say, I do not think you would have separate requirements and then a design. You would have documentation of how we are building this darn thing. And it has only the level of detail that makes sense, so that the upstream people can look at it, and understand it and agree with it. And the downstream people do not step on each other's toes. If you get more detailed than that, then it is never in sync with the actual code, and it starts to lose its value. So watch out for that.

(01:12:05):

Again, because I am in medical devices, you have the requirements, and then you have the design. The only thing that I do about applying Agile, is just recognize that it is not this linear thing that some people still have. They even have, in their Microsoft Project schedules, these gated things, like the requirements are done here on this date, and then they are fixed, and then the design document comes here.

(01:12:34):

That is where I have to take them by the ear and say, "No, that is not how real life works." Yes, you spend time arguing over requirements upfront. Again, that is kind of that minimal set of requirements. If we have to argue over what it is going to do, if you want prototypes or mockups, or whatever, we can do that. We can do all that. And let us get to the minimal set of requirements, design and architecture for that.

(01:12:59):

There are always going to be tight feedback loops, everything- We get to, "Oh crap, we cannot build that. The CPU is too slow. Either we have to have a faster processor, or I have to do this loop slower. How does that impact the functionality?" And all of those things. You just have to do it many, many times. So do not introduce a lot of friction into each one of those points.

EW (01:13:23):

I have one last question for each of us to answer. From Scott Watson, "What is a good elevator pitch to give for Agile to a skeptical manager, especially one who has actively resisted Agile processes, even while the surrounding teams are adopting them?" We are all arguing for Agile here.

CW (01:13:43):

No, I am just trying to think of the- Yeah, that is very specific.

JG (01:13:47):

I am not going to come up with the- You were mentioning earlier that couching it in terms of risk management is a great sell. So I do not know if we can come up with a pithy 20 second version of that.

LI (01:14:01):

The thing that I am reading here, is that this feels a bit like a real life situation, and there is just somebody who-

CW (01:14:07):

<laugh> Yeah. Do my homework for me, please.

EW (01:14:09):

<laugh> Please talk to my manager.

LI (01:14:12):

Yeah. Do not get me wrong. I totally understand those people. Also the engineers who would really want to be more Agile, but are bound by the structures within the company. It is really difficult for me to tell them, "You know, there is only so much you can do without having some power at your company."

(01:14:35):

It feels like this hypothetical manager is resisting, just because they do not like Agile for some reason. That feels like whatever factual arguments we come up with are not going to move the needle, because, what is that saying? You cannot reason somebody out of a position they did not reason themselves into.

(01:14:59):

So I would love to say something pragmatic about risk management, and speed and all of that, but maybe it boils down to, "Do you really want to be the guy who tackles this last? Do you really want to be the laggard in your company?" And stoke some good old FOMO in them. Maybe that is the pragmatic <laugh> people management approach.

CW (01:15:30):

I think there is something we have touched on here that has made me feel a little bit better: there is a cafeteria approach <laugh> to Agile. You pick the things that work best for your company. The goal should be to eliminate artificial rigidity from your processes, so that you are not locked into things that are harming you and making your development process slow, just because they exist as part of your process.

(01:15:52):

Finding those things and eliminating them, and putting in rapid iteration cycles where you can answer questions that are high risk and get them out of the way. That has always been my philosophy with development, is what are the risky things? Let us make sure we can do those, and those work before we get too far along, and it is like, "It is September and we need to ship in October. Oh, by the way, the battery only lasts two minutes, because we did not bother to do a power analysis." Stuff like that.

(01:16:21):

So I think where Agile helps is giving a framework for eliminating rigidity, increasing your ability to answer questions quickly, and to be able to adapt to new situations that come up inevitably when you do development.

JG (01:16:39):

And if the word "Agile" is turning a certain person off, just do not use that word. Substitute "nimble," or just give the specific examples. Like do not say, "Hey, we need to do Agile to reduce risk." Like, "Hey, we need to test this thing to reduce risk. Hey, we need to build this mockup." Use the situation, the specific scenarios. If it is obvious that that is the right thing to do, and you are not couching it in, "We are bringing Agile in," I think the person is much more likely to go for it. Maybe not. Maybe they are just intransigent.

CW (01:17:15):

Yeah. If you say you are going to hire six Scrum masters, then maybe that turns people off.

JG (01:17:19):

And that is usually not the answer.

EW (01:17:21):

I think what you do is you get some custom printed magnets, with the words "risk management," "fast feedback loops," "early customer-" I do not want to use "feedback" again. "Early customer comments," "minimal projects that can be shipped," and you just shake them up and you figure out how to say that. You could put "nimble" in there, but do not put "Agile."

JG (01:17:51):

Okay.

EW (01:17:55):

It is about getting the feedback earlier. It is about not waiting until it is too late. It is getting the feedback fast enough that you can react to it in a reasonable manner, to reduce the risk of the product as a whole. Reduce the risk of your piece of the product, and reduce the whole company's risk, as they work to find a product that can get customer feedback faster. So it is all about little mini milestones, and trying to get things more easily defined.

JG (01:18:37):

I like that.

LI (01:18:38):

Yeah. Which is also always a good idea, is it not? Call it Agile, do not call it Agile.

CW (01:18:42):

Yeah. That is what I have always done. So I just never called it Agile. I mean, I was doing it before Agile existed. <laugh>

JG (01:18:47):

Cool!

LI (01:18:48):

Yeah, I like to say, "I was not there, but I guarantee that the wheel was not invented in a Waterfall process."

CW (01:18:53):

Right. <laugh> But it was used on waterfalls later. Anyway, <laugh>. Yeah. No, I think a lot of the trouble that managers have, when people bring Agile to them, is that unfortunately there has been an Agile industry that has built up and that turns people off. I think when somebody says, "Oh, I want to do Agile," they think, "Okay, we are buying a product. We have to buy the Agile product." That is where people get in trouble. Instead of just, "Here are these principles. We can implement these principles in different ways, but we should adhere to these principles, because they are very helpful for getting us past roadblocks and answering questions."

EW (01:19:33):

Goes back to Matty C's, "Anything that shiny is hiding something under the surface."

CW (01:19:38):

<laugh> Right.

LI (01:19:40):

Oh, of course. There are lots of things hiding under the surface of Agile. I mean, it does have its downsides. One of which is, it is maybe more effective, but it is certainly less efficient. A lot of people are distressed by this loss of efficiency, which is just a thing. But also I think, if you approach a manager and say, "You know what? We should be doing Agile," what you are really saying is, "I am going to take away your approach to managing risk, which you have so far done using contracts and specifications and all of that." If you do not give them a good answer on how you are going to manage risk now, then they are quite reasonably concerned.

CW (01:20:26):

Yeah. Okay.

LI (01:20:27):

They say, "Well, there are risks here. How do we address them?" And if you do not give them a good answer, then they are quite right to resist.

CW (01:20:33):

Yeah. If you are just going in, "Let us do Agile," you better have a good story for how that applies to your product, your actual organization, and how you are going to use it. Rather than just, "Here is a book. Let us do it."

EW (01:20:45):

I say do not use the word.

CW (01:20:46):

Yeah. But even if you do not use the word, like Luca is saying, you have to have a story for how what you are proposing is going to solve problems, instead of just being a list of things on a piece of paper.

EW (01:20:57):

Also, you do not need permission to do Agile. These things about fast feedback loops, they do not actually require other people, until you start working at the team and higher levels. You can do fast feedback on your own.

LI (01:21:13):

Yeah. I think the strongest point here, is things like TDD. Who is going to tell you when to write your tests? That is just childish. If you feel like doing TDD or unit tests, go ahead and do it. It is your own personal decision. That said, there is going to come a point where you really need buy-in from the rest of the organization, in order to apply your ideas and your principles to more parts of the overall value stream, of the overall product creation process. You can only go so far on your own.

EW (01:21:47):

Well, I think we are about out of time, so I am going to say, I am Elecia White. I work at Logical Elegance as a consultant. I am the author of O'Reilly's "Making Embedded Systems." And I hope you come listen to us on the Embedded show.

CW (01:22:04):

I am Christopher White. I also am a consultant at Logical Elegance, and I try to do as little as possible.

JG (01:22:14):

<laugh> I will say, I am Jeff Gable. I am a consultant for the medical device industry in embedded software. So I develop embedded software for medical devices, and do all the documentation and everything to get your product through FDA, at least from a software side. So come to me if you want to know more at jeffgable.com. Luca?

LI (01:22:36):

Yeah. And I am Luca Ingianni. You can find me at luca.engineer. So L U C A dot engineer. I promise that is a real URL. I do lots of training, lots of consulting in this entire space of Agile, DevOps, Scaled Agile. Helping your teams to get faster, get better, and enjoy their work more.

EW (01:22:59):

Luca and Jeff do have a podcast of their own, the Agile Embedded podcast. Please check it out.

JG (01:23:05):

Thank you very much.

LI (01:23:06):

Yeah, thanks for reminding me <laugh>.

JG (01:23:08):

Oh yeah, that whole thing. <laugh>.

CW (01:23:11):

Thanks guys. This was super fun.

EW (01:23:14):

Thank you.

JG (01:23:15):

Thanks, Elecia. Thanks Chris.

LI (01:23:17):

Likewise. Thank you so much.