488: Two Slices of Complimentary Bread
Transcript from 488: Two Slices of Complimentary Bread with Adrienne Braganza Tacke, Christopher White, and Elecia White.
EW (00:00:06):
Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Adrienne Braganza Tacke, author of "Looks Good to Me" and "Coding for Kids: The Python version." We are going to be talking about code reviews. No, do not leave yet! I know. I know. We all have some damage here, but let us just see what Adrienne has to say.
CW (00:00:31):
Hi, Adrienne. Welcome.
ABT (00:00:34):
Hi, you two. Thank you for having me on the show.
EW (00:00:36):
Could you tell us about yourself, as if we met on a panel discussing technical writing?
ABT (00:00:44):
Sure. I am a software developer turned accidental developer advocate. I have been in software development for about a decade now. Anywhere from C# to JavaScript.
(00:00:58):
Now I am taking that knowledge and I am trying to decipher developer topics, like code reviews. Really make them more approachable, more fun, and make them come back into the spotlight again. Because even though we think we know these things, we really do not.
(00:01:17):
But the more important thing is that I really, really like pastries and vintage shopping. And I love "Age of Empires II." That is my favorite computer game of all time.
CW (00:01:30):
All right!
EW (00:01:32):
We do this thing we call "lightning round," where we ask you short questions and we want short answers. If we are behaving ourselves, we will not jump in with our own answers or ask why and how. Are you ready?
ABT (00:01:44):
I am ready.
CW (00:01:46):
Is it easier to read code or write code?
ABT (00:01:49):
Oh. Write code. Definitely.
EW (00:01:52):
What are you going to be for Halloween?
ABT (00:01:54):
Oh, my. I am going to be Adrienne with a job, because I currently do not have a job. <laugh> I was part of the Cisco layoff. So, Adrienne with a job. Let us just say that.
CW (00:02:09):
What is the best thing to see on vacation in Portugal?
ABT (00:02:13):
Ooh. Best thing to see is the Ponte Luis Bridge, from a restaurant called "Muro Do Bacalhau."
EW (00:02:22):
What is your favorite Portuguese pastry?
ABT (00:02:25):
Oh, I mean, pastéis de nata. Hands down. There is a reason it is famous.
CW (00:02:32):
Okay. I need to know what that is.
ABT (00:02:33):
It is a little egg tart in a flaky pastry shell. They are really small. They are like little mini egg pies. Yeah, they are really good. You can eat like 12 of them.
CW (00:02:45):
<laugh>
EW (00:02:45):
Are these like quail eggs?
ABT (00:02:48):
Oh no, they are sweet. They are made out of custard, like pastry cream, inside.
EW (00:02:53):
Is that your favorite pastry of all time?
ABT (00:02:57):
Ahh. That is a tough question.
EW (00:03:00):
I know.
ABT (00:03:00):
It is probably one of them, because it is one of those I could eat a lot of and not get sick of it. So that is kind of my bar there. <laugh> Actually my favorite- No, I will go with that. That is a pastry.
(00:03:16):
I am now veering into desserts, which people get upset, because they are like, "That is not a pastry." So yes, I will say that is one of my favorite pastries of all time.
CW (00:03:25):
Favorite campaign in "Age of Empires II"?
ABT (00:03:31):
Ohh. Ah. I cannot decide. I am sorry. I cannot. All of it.
EW (00:03:34):
Complete one project, or start a dozen?
ABT (00:03:38):
So, idealized Adrienne says, "Complete one project." Real Adrienne says, "Start a dozen."
CW (00:03:46):
If you could teach a college course, what would you want to teach?
ABT (00:03:48):
Oh. Pastry. How to be a pastry chef. Specifically how to create viennoiserie, and I may be pronouncing this incorrectly. It is any pastry that deals with bread or dough. That is what I would like to do.
CW (00:04:04):
Can we just have this show about pastries?
ABT (00:04:05):
Totally can.
EW (00:04:06):
Yes!
CW (00:04:06):
You are going to come back and do a pastry show with us. I have decided.
ABT (00:04:09):
Down. I am down.
EW (00:04:09):
Do you have a tip everyone should know?
ABT (00:04:12):
A tip everybody should know is that our library system is actually pretty awesome.
CW (00:04:19):
Yes!
ABT (00:04:20):
There are a lot of books, even new books, that you can likely find there. So check out your libraries. Because there have been so many books that I have found at the library system.
EW (00:04:31):
And your library will offer you electronic books.
CW (00:04:36):
And movies, and audio books. They have all sorts of stuff. Yes.
EW (00:04:41):
And local community activities. And, I hear that our local library is going to have an art display in November.
ABT (00:04:48):
That is awesome.
EW (00:04:48):
It is all going to be octopuses made out of paper.
ABT (00:04:52):
Even cooler. I am going to go to your library system now. <laugh>
CW (00:04:57):
She is plugging her own art installation.
ABT (00:04:58):
<laugh>
EW (00:05:02):
Okay. Your book is "Looks Good To Me."
CW (00:05:05):
That is the title?
EW (00:05:07):
That is the title.
CW (00:05:08):
Yes. Okay. <laugh>
EW (00:05:09):
That is the title.
CW (00:05:09):
Just do not want any confusion.
EW (00:05:13):
But that phrase means, "I waved my magical wand over it," and "I did not look at it," and "I do not care." And, "Please go back to your own job and quit bothering me, so I can do my own things." Why did you not name it "I do not care. Go away"?
ABT (00:05:28):
<laugh> I think the reason-
EW (00:05:32):
<laugh> Sorry.
CW (00:05:33):
Do we have a relationship with code reviews here, that maybe is biasing us? Okay. Sorry. Go ahead. Yes. <laugh>
ABT (00:05:39):
I think you are the right folks to talk to, because this is exactly why I wrote this book. No, that exact feeling that you have just described is exactly why I wanted to write about code reviews.
(00:05:52):
We are all quite familiar with this now infamous acronym. Or even the emoji, if you use the thumbs up. And it does not look good to me, the process does not look good to me. Everybody knows that- Oh, I do not want to say everybody knows. Most developers have this type of relationship with that phrase.
(00:06:14):
So I wanted to dive deeper into why that is. Why is such a key part of the software development process so hated, so dreaded. Why does it not work? It makes sense logically, but then what causes it to not work as we intend it to? That is really the catalyst for why this book was written.
EW (00:06:38):
Okay. Why does it not work?
ABT (00:06:41):
Do you have two days to talk about that?
EW (00:06:44):
<laugh> 350 pages-ish?
ABT (00:06:47):
<laugh> Yes. If you want to really dive deep into it, definitely check out my book. That is exactly what I write about. But also how to fix it.
(00:06:56):
I think one of the largest reasons that it does not work, is that we do not really outline it as a team, in terms of what actually goes on in the process and what we want the process to do.
(00:07:09):
I found that a lot of teams, including my own and several teams that I have been a part of, it is either, "This is the process," and everyone just follows it. Especially if this is their first time being acquainted with a code review process. So they are like, "Oh, this is the code review process," and they take that with them, and think that is how all code review processes should be.
(00:07:34):
Other times they just do not have one at all. So when they do get to a team that has one, they are like, "What is this bottleneck? What is this? This is not normal software development."
(00:07:46):
I think people take their very first experience of what a code review is supposed to be like, and bring that forward with them into their careers. That can be a good or a bad thing.
(00:07:57):
But as we have seen, and as I have experienced, most of the time it is a bad thing. Because the process is not enforced the way they think it should be. There is bias in the system. They are not using the tools that they should be using, or are not enforcing the process the way they intended to.
(00:08:19):
There are a lot of people problems, a lot of human bottlenecks, that are in the process as well. So out of all of those things, I think they just contribute to a process that is not what we intend it to be.
EW (00:08:37):
One of the things I liked best about your book was Appendix A. I admit I have not read it all. I mean, I have read all of the Appendix A, I have not read all of your book, because I got lost in Appendix A and just wanted that more than anything else. Could you talk about the Team Working Agreement?
ABT (00:08:55):
Absolutely. I think that this document, this "Team Working Agreement" is what we call it. We did not start it. "We" meaning the software development industry. We borrowed it from the project management world.
(00:09:13):
It is this document that is supposed to get everybody on the same page. It is supposed to make implicit expectations explicit. It is such a great thing to use for a team's code review process, because there are so many implicit expectations that we have around the code review process.
(00:09:36):
Like I mentioned, everybody has different experiences with the code review process. So someone may think, "Oh, I can just merge straight to production." Or, "I can approve my own pull requests," if they are using pull requests. While someone else may have come from a more strict process and go, "What are you thinking? That is something you never do!"
(00:09:58):
When you combine all of these different experiences, and different levels of strictness in the code review process, that is where the friction comes about with teams. Because people are not living up to the expectations of their colleagues or their managers, when it comes to the code review process.
(00:10:22):
So this document, I talk about it in the book. It is tedious, but the first thing you have to do is actually sit down with your team and talk about these different things. You have to come to an agreement on what those different pieces are about your code review.
(00:10:41):
Some of the things I suggest adding into the Team Working Agreement, which have usually been the cause of a lot of debates or tension, are things like what is your actual process, the steps and the workflow. Because you would be surprised if you asked your team, "Hey, can you tell me what actually happens in our code review process from beginning to end?" A lot of folks have different workflows. They will have different steps.
EW (00:11:11):
<laugh> Yes. One person is like, "I ask people for reviews. Nothing happens. I merge my code." Someone else is like, "You do these five things in order, and if you do not do them, then you cannot pass anything. And that is why my code never gets merged."
ABT (00:11:24):
Exactly.
EW (00:11:25):
Nobody knows the rules. What are we supposed to do with code reviews?
ABT (00:11:29):
Exactly. So that is number one, is getting essentially everybody onto the same page. Another thing is what are your actual responsibilities in terms of author and reviewer?
(00:11:41):
For the reviewer, for example, what are you supposed to focus on? A lot of folks get into really long debates with their authors, because they are leaving feedback on things that the author feels like should not be feedback. Or they leave comments on things that they feel do not need to be addressed. So that is another really good thing to discuss, is what is the responsibility of the reviewer? What should I look for?
(00:12:08):
Alongside of that, what are topics or issues that are blocking versus non-blocking? So if you have differences on what you think is severe enough to stop a pull request from going through, those need to be sorted out. Because that is again another source of debate, of tension.
(00:12:28):
Also a reason why code reviews take so long, is because you just get into these really long debates about who is correct or, "Why are you stopping my code review?" Or it is the other way around, "Oh, this is something so silly. Just merge it through."
(00:12:43):
These are all the gaps that we do not talk about. The things that are implicit, that we think we know and think everybody else on our team knows. But if we take the time to actually talk it out, to work it out, and to at least start with some sort of baseline and put them in this Team Working Agreement, that means you have all agreed to it. That means you have all co-created this document. And that means you all can enforce this in your process.
(00:13:12):
So now instead of, "I think we should block this PR because I think we should do it," it is, "I am blocking this PR because our Team Working Agreement says we should." Now it becomes a more objective thing, versus a subjective thing.
(00:13:28):
I can go on about it. I love this thing too. But the whole point is to make a lot of these implicit expectations and guidelines explicit, so that your team can use them. And actually progress the process forward, rather than get stuck in these debates.
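For a concrete sense of what goes into such a document, here is a purely hypothetical excerpt of a Team Working Agreement. The entries are illustrative, not taken from the book:

```text
Code Review Working Agreement (hypothetical excerpt)
- Workflow: open a PR, request one reviewer, merge only after approval and green CI.
- Reviewer focus: correctness, tests, readability. Formatting is left to tooling.
- Blocking: bugs, missing tests, security issues. Non-blocking: naming, style nits.
- Turnaround: first review pass within one business day.
```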
EW (00:13:47):
I do like the idea of blocking versus non-blocking comments. Where blocking is, "No, I do not think you should go on." And non-blocking is, "I am making this comment. I think you should think about it, maybe reconsider what you are doing. But if you opt to continue-"
CW (00:14:05):
"Or follow up in a later PR, or whatever."
EW (00:14:07):
Right. "Go ahead and merge this. We can talk about it later." Personally, I have always used a priority system for my comments. Whether they were bugs or ought tos or maybes or nits or kudos, which are the good things, which I do think you should point out. Do you use a priority system, or is it just blocking or not blocking?
ABT (00:14:32):
A combination. As you were talking about it, that brought to mind another thing I talk about and have used on my teams, and that is- I call them "comment signals," other folks call them "categorizations." There is a person, Paul Slaughter, who created something called "conventional comments." They are all ways to classify the comment.
(00:14:57):
This is exactly what you said. It is a way to prioritize, "What should I look at, as an author?" versus, "What should be safely ignored?" or, "What should not necessarily mean I have to look at this right away?"
(00:15:12):
This is something I bring up, because for me, whenever I receive my pull requests and I see comments, I automatically assume that all of them mean work. Every single one means there is something I have to change. So that is-
EW (00:15:31):
And each one is a little tiny stab, either with the...
ABT (00:15:35):
<laugh>
EW (00:15:35):
Yeah.
ABT (00:15:35):
Yes, yes. So part of decreasing the delay in a code review is for an author to know, "Well, are all of those comments actually work, like I am assuming?" Or maybe only two of them actually require addressing. The other few are just things, like you said, to look at later on, or a nitpick.
(00:16:01):
I think that is really important for reviewers to do, so that if they are leaving feedback, authors can easily categorize the feedback that they receive. Then they can mentally say, "Okay, here are the ones that I need to address right now. Here are the things I need to look at."
(00:16:19):
Versus sifting through all of the comments that may not be labeled or categorized. And then having to determine what needs to be worked on, versus what does not need to be worked on.
(00:16:33):
So yeah, the prioritizations that you were talking about, my team used different ones too. We had "nitpicks," even though I absolutely hate nitpicks. <laugh> But we had one senior whose feedback mostly only fit into a nitpick, so we kept that comment signal.
(00:16:52):
But the other ones we used that worked for our team were "needs change," where it was very clear we needed to work on something. And "small change," something that could be updated easily in the PR.
(00:17:02):
"Needs rework." So this one is we are seeing something and there is actually a lot more discussion that needs to be had. Typically we talked offline and said, "There are a lot more things that need to change here."
(00:17:15):
And then we also had "level up," which we added later on. Which is akin to, "This is something that you could take a look at later on. It does not block it right now. But something to look at for the future to improve this code."
(00:17:31):
And then something like "praise," which is similar to kudos. If you saw something really, really awesome, or a really elegant solution, we would use that to tell our colleagues, "Hey, awesome job. This is really, really cool." Or, "I did not know that. I learned something from you today. Thank you for sharing it."
(00:17:52):
Those are the different ways we categorized our comments in our code reviews.
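These signals lend themselves to light automation. As a rough sketch, assuming reviewers prefix each comment with a "label: text" convention (the labels follow the team's list above; which ones count as blocking is an assumption for illustration):

```python
# Sketch: triage review comments by the "comment signal" prefix a reviewer used.
# Which labels count as blocking is an assumption for illustration.
BLOCKING = {"needs change", "needs rework"}
# "small change", "nitpick", "level up", and "praise" fall through as non-blocking.

def triage(comments):
    """Split 'label: text' comments into work-now vs. look-later piles."""
    work_now, look_later = [], []
    for comment in comments:
        label = comment.partition(":")[0].strip().lower()
        # Unlabeled comments default to non-blocking here; a stricter
        # team might default the other way.
        (work_now if label in BLOCKING else look_later).append(comment)
    return work_now, look_later

comments = [
    "needs rework: this retry loop can spin forever on a timeout",
    "nitpick: prefer an early return here",
    "praise: nice use of a lookup table",
]
work_now, look_later = triage(comments)
print(len(work_now), len(look_later))  # prints "1 2"
```

An author scanning a PR could then address the work-now pile first, which is exactly the mental sorting described above.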
EW (00:17:58):
A lot of people when we give criticism, are taught to use the "ham sandwich method."
CW (00:18:05):
<laugh> I learned it was the "turkey sandwich method."
EW (00:18:07):
Whatever.
CW (00:18:07):
<laugh>
EW (00:18:07):
It is compliment, criticism, compliment. When you are trying to tell somebody that their presentation needs work, you try to start with a, "I really like the material. You should consider not using the Darth Vader voice. But I also really like your shoes." We never do that for code reviews.
CW (00:18:34):
That is because it is supposed to be efficient and cold and bloodless and through electronics.
EW (00:18:38):
We are not robots!
CW (00:18:39):
Yes.
EW (00:18:41):
Not yet.
CW (00:18:41):
<laugh>
EW (00:18:41):
Although many things in code reviews should be done by robots. I do not want to talk about formatting. That is over for me.
CW (00:18:50):
Yeah, yeah.
EW (00:18:54):
Is there a way to convince my team that they do need to highlight the good stuff, as well as the bad?
ABT (00:19:03):
The compliment or criticism, whatever the sandwich. Let us call it "the sandwich." It is a good tactic to use. Not everybody agrees with it, because some folks are just like, "Just tell me. Tell me what it is, because I do not need the buffer of the two slices of complimentary bread to tell me and to prepare me for the criticism." But other folks do. Other folks like it that way.
(00:19:32):
One thing that I think would make comments better, is to just write them in a more objective way. I think a lot of folks have trouble writing objective comments. That is where a lot of the feelings come from, of being attacked or critiqued, of the process being bloodless, soulless, this terrible thing where we have to see everything that we did wrong. It comes from feedback that is just written poorly.
(00:20:07):
It is terrible communication. And with that just comes a higher chance for miscommunication, for misinterpretation. If you do not understand each other, author and reviewer, then again it delays the code review. Because now you have to continue talking back and forth to get to the same page.
(00:20:27):
I do dedicate an entire chapter to writing comments, because that is how important it is. There is one tactic in there, I call it the "triple R pattern." It is, I think, a way to help you formulate an objective way to ask somebody to do something, or to give some sort of critique.
(00:20:53):
So the first R is the request, or the critique, if you will. That is: what is it you are asking the author to do?
(00:21:03):
The second R is the rationale behind that piece of feedback. So, why are you saying this? A lot of people forget to add that part. Sometimes they just give the request or the critique. And then you are left wondering as an author, "Well, this sounds subjective." Or, "Okay, well, tell me. Tell me more about why."
(00:21:20):
So the rationale is really, really important, because now you are giving a reason as to why you are saying that first part. The important part about the rationale is that these should be objective sources, to support what you are trying to say. So is that the Team Working Agreement that you are citing? Is it a blog post that you have read that is related to what you are trying to say, or supports what you are trying to ask for? Is it a coding convention?
(00:21:47):
Is it something objective? Because if you just say, "I think it should be done this way," that is when people have a problem with it because, "Well, why is your way better? Well, why should I listen to you? What about that is objective enough for me to accept that type of request or critique?"
(00:22:06):
And then the final R there is the result. Some sort of measurable end state that the author can compare their changes to, so that author and reviewer get onto the same page about what is being asked for.
(00:22:20):
So: request, rationale, result. Having that be the way reviewers compose their comments, I think, helps them write in a more objective way, rather than just saying, "Oh, you should change this," without any rationale behind it.
(00:22:43):
It is more likely for the author to take that as something subjective rather than objective. That is another thing I say, is that reviewers need to be objective. But I think this would work for critique as well. Having an objective reason for what you are saying, usually tends to have the author accept that feedback much better.
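The triple-R structure can even be captured in a tiny helper, so a tooling-minded team could nudge reviewers toward it. This is a hypothetical sketch, not anything from the book; the function name and output format are invented:

```python
# Sketch: compose a review comment from the three Rs described above.
# The helper and its output format are hypothetical illustrations.

def triple_r(request: str, rationale: str, result: str) -> str:
    """Build an objective review comment: what to do, why, and the end state."""
    if not rationale.strip():
        # The rationale is the part people forget; make it mandatory.
        raise ValueError("A request without a rationale reads as subjective.")
    return f"Request: {request}\nRationale: {rationale}\nResult: {result}"

comment = triple_r(
    request="Extract this query into the data-access layer.",
    rationale="Our Team Working Agreement keeps SQL out of request handlers.",
    result="The handler only validates input and calls the data-access layer.",
)
print(comment)
```

Forcing the rationale to be non-empty mirrors the point above: a bare "you should change this" invites a subjective reading.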
CW (00:23:06):
So you are saying if somebody posted a PR, a lengthy PR, and then got a response back from a very senior reviewer that simply said, "No," that this would not be an appropriate comment.
EW (00:23:19):
<laugh> Christopher is still mad about that. <laugh>
CW (00:23:22):
It did not happen to me personally. I had almost as bad things. But that happened to someone I knew.
ABT (00:23:27):
Using our pattern here-
CW (00:23:29):
<laugh>
ABT (00:23:32):
With very few words, you could say, "Please break up. No. Too long. Make smaller PR." <laugh> So using the three there, I could totally understand. But yes, even a big old, "No," is missing some context.
CW (00:23:51):
I have a question, not to get all Marshall McLuhan, <laugh> but when I first started doing development, I was at Cisco actually. Back then, the tools, the development tools we had, were email and a very simple bug tracking system.
EW (00:24:09):
<laugh> And ClearCase?
CW (00:24:10):
No, this was before ClearCase. This was CVS.
EW (00:24:13):
Oh. Well, if you have to choose between those two... <laugh>
CW (00:24:16):
CVS for source control, and-
ABT (00:24:17):
Oh, my.
CW (00:24:19):
We had our own bug tracking system, which they might still be using. I do not know, probably not. Anyway, the bug tracking system did not have code reviews as part of it. You could not put code into it. It was just, "Here is the title and the-"
EW (00:24:28):
That is the most beautiful thing about GitHub-
CW (00:24:31):
I am not done yet!
EW (00:24:32):
Is how the PRs work.
CW (00:24:34):
This is important. So when you wanted to have a code review, you pulled the diff yourself in CVS, you pasted it into an email. You sent it to your team with, "Code review, please," and the ticket number. And then you put a little description in. People would email you back, reply all, and everyone would have a conversation over that.
(00:24:52):
It does not sound like it, but that feels different to me than the way the modern GitHub process with Jira and code reviews works. It felt more personal. It felt more like, "I am having a conversation with my team," than, "I am having a conversation with a robot that my team is participating in."
(00:25:11):
I am not saying either one is bad or good. My point is, how much do the tools influence how people review code? And how they- I guess, the attitude they put to it and maybe how they feel about the comments?
(00:25:28):
Because I felt like I was sending an email to my team, and they were responding back to me, and we would have a discussion. Whereas now when I put a PR up, I feel like I am throwing a stone tablet over the wall for people to peck at.
(00:25:42):
I know it is not a big distinction, but I have been thinking about that lately. I think it matters what medium the code reviews are done under.
ABT (00:25:50):
No, you bring up a really great point, and that is the tools do matter. What works for your team is going to differ, team to team.
(00:25:58):
Yes, pull requests, merge requests, change requests. This notion of using a tool to put those code changes up and have comments left on it, that is certainly the more well-known and popular way to do it.
(00:26:15):
But like you have mentioned, that is not the only way to do it. What you have described, this email review, is still being done. It is just not something that I discussed in my book. And it is not something that should be ignored either. Because that works well for other teams, who may like to talk to each other, who feel that they can talk to each other.
(00:26:40):
If it is critique or constructive feedback and the medium of talking via email works for that team, then why should you move over to pull requests or a tool that uses something like that, because everybody else is doing it? That is a really important thing to remember, is that what works for your team is the best thing to use. It does not have to be a tool that uses pull requests.
(00:27:10):
There is another form of that called "patch changes," which I think is very similar. Where everybody discusses a patch change, looks at the code, and discusses it via email as well. Again, if it works for your team. The process that works for your team, and that actually achieves whatever goals you intended for your reviewing mechanism, I believe that is the best one to use.
(00:27:43):
The other process that I also think about came from my research into what code reviews used to be like: Michael Fagan at IBM. He has been credited with the first formal inspection process that we do.
(00:27:58):
For the folks who complain about our PR process today, they should go through that. I was reading it like, "Oh my gosh, there is a moderator. There is a full meeting set up. Code is printed out on paper!" <laugh>
CW (00:28:15):
I have done that.
ABT (00:28:15):
Oh my gosh.
CW (00:28:15):
I have had those meetings. Yeah.
ABT (00:28:19):
In my opinion, there is a time and place for that as well. I would love for teams who have only done the PR process, to actually see what that would be.
(00:28:29):
Do a Fagan inspection. Take a look, sit down, have a really formal meeting about it. Discuss just the code changes, face to face, rather than doing it in the pull request. See: is this actually a better way to review code? Because now you are face to face, it is more synchronous. You get all the questions, or hopefully all the questions, answered.
(00:28:53):
Or do you like the asynchronicity? It is going to depend on the teams. Depending on the team, there are different attributes of different tools and different ways that will work for them.
(00:29:06):
I am really happy that you did bring that up, because it is a very valid way to do code review. I will always argue that if there are multiple eyes looking at code, no matter how you do it, that is way better than not having any type of review at all.
CW (00:29:21):
Yeah.
EW (00:29:22):
What about the reviews, more Fagan-like, where everybody gets in a room and nobody has read the code ahead of time, except for the author? Have you done those? Can we call those terrible?
ABT (00:29:34):
<laugh> I have not personally done that. What comes to mind is maybe you could do something Amazonian-like, where it is not fully the same. But before every meeting, they do give folks around five minutes at the beginning of each meeting, to read through meeting notes. Or read through a document, about what they are about to discuss in the meeting.
(00:30:01):
I think that is still valid, because if you give the time to the team to all look at it at that same moment, at that time, then you are at least laying some sort of foundation for the rest of the talk. But I would think that having a more asynchronous form, depending on how large your team is, would be a better way to look at that code.
EW (00:30:28):
How long does it take you to do a code review?
CW (00:30:31):
<laugh> That is a broad question.
EW (00:30:33):
Do you want more? I can ask a more detailed question.
ABT (00:30:38):
<laugh> Well, that depends. If it is an ideal pull request that I receive, then it usually takes me around half an hour. Half an hour to 45 minutes to really take a look at it, dig deep into the pull request description, see the code and do what I feel is a proper review.
(00:30:59):
If it is not an ideal PR, I send it back. <laugh> If it is too large, because I am like, "No. I am not going to review this. It is impossible for me to properly review this." Sometimes there are pull requests that just cannot be broken down, despite it being large. Then it takes the whole team to look at it, because one person again is not enough. If it is such a large change, we all need to be apprised of it.
(00:31:30):
But in an ideal state, around 30 to 45 minutes.
EW (00:31:35):
Let me add a few parameters, and maybe the answer is the same.
ABT (00:31:38):
Sure.
EW (00:31:39):
Given it might take you N hours, whatever N is, to write some medium small amount of code, say 250 lines of code. How long would it take you to review code of the same complexity?
(00:31:54):
So you write this piece of code, I do not know, let us call it SPI driver, and you are going to review I2C driver. Those are not equal complexity. Just go with me here. But let us say you are writing a ball of code, takes you N hours. How long does it take you to review it?
(00:32:16):
And remember, in lightning round, you did say it is easier to write code than it is to read it.
CW (00:32:22):
<laugh>
ABT (00:32:24):
That is true.
EW (00:32:25):
You knew that was going to come back and bite you, right? All of those questions are going to come back. <laugh>
CW (00:32:29):
Mostly the pastry ones.
ABT (00:32:34):
That depends, right? Am I reviewing my own code? Then, yeah, it is going to take even less time, because I am very well acquainted.
EW (00:32:41):
No, no, no. We are not going to-
ABT (00:32:43):
So separate code.
CW (00:32:43):
I get that the question you are asking is, what level of effort, fractional level of effort, should be applied to reviewing, that is applied to writing?
EW (00:32:51):
Yes, but fraction does not have to be less than one.
CW (00:32:54):
Sure. Right, right. And it can be team wide or whatever. Yeah.
ABT (00:32:59):
The obvious answer there is, "Yes, it is going to take a bit more effort to review code." That is also why I said writing code is easier.
(00:33:08):
Because when you review code, assuming we are still in a PR process here, it depends on: how well were the context and nuance captured in the description? How well is the code actually written? Are there code review comments? Is this something I am familiar with from previous issues or not? Is this a part of the codebase I am familiar with or not?
(00:33:34):
There are so many parameters that definitely influence how well and how long it is going to take me to review that code. The more unknowns there are and the more unfamiliar parts there are, it is going to take longer. So I would argue that the same amount of code being asked to be reviewed of me, would likely take longer.
EW (00:34:05):
Okay. That is really an important thing to say, that I do not think enough people hear. If it takes you an hour to write a blob of code. And a co-worker who is the same as you, you just do not share a history, gives you a blob of code of that same size and complexity. It is going to take you an hour to review it. Do not try to do it in ten minutes. It is not going to work.
CW (00:34:33):
I think that is not necessarily even a thing that developers need to hear, but that needs to be part of-
EW (00:34:39):
Managers need to hear it.
CW (00:34:40):
Right. That is what I-
EW (00:34:40):
Code reviews are not-
CW (00:34:42):
It is not free.
EW (00:34:44):
Free. When I was teaching, I gave students like ten minutes to review each other's assignments and discuss them. I thought this was plenty of time. The assignments were not exactly the same. Everybody implemented their things differently, but they were all doing pretty much the same thing.
(00:35:00):
One of the students asked me to show them what I meant by "review each other's assignments." It was not like I was going to grade them, come on! The student, Carrie, who asked me this was entirely correct. It took me 40 minutes and the rest of the class to review her assignment.
(00:35:19):
Some of that was because I was talking, answering questions and generally showing off what knowledge I have. But ten minutes was not long enough to look at strange code, even if I knew what the code was supposed to do.
(00:35:33):
How do we take away this mindset that code reviews are supposed to take 10 to 20 minutes?
ABT (00:35:43):
I do have that guideline. I will say that as a caveat, it is just a guideline that is something to strive towards. But the main goal there is just to have pull requests be smaller. So on one hand there are code changes that can be so small, so atomic, that it would fit into that guideline of 10 to 20 minutes.
EW (00:36:09):
Oh, yeah.
ABT (00:36:09):
But yes, I completely agree with what you are saying. Maybe people take that guideline and say, "Oh, I only need to spend this much time reviewing," and that is not something that I am advocating for.
(00:36:27):
What I advocate for is for people to give a thorough review, rather than to hit some arbitrary time. Depending on the code, that could mean a lot longer than 20 minutes. That could mean a lot longer than an hour. And that could mean a lot more folks than just yourself looking at that code.
(00:36:48):
So how do we tell people that this is the case? Hopefully they listen to this podcast and understand that, "Hey, reviewing code is a lot more work, and it is a lot more involved, than writing code."
(00:37:05):
I mean, we have robots that are writing code for us, right? So even we are not doing that anymore. We could conjure up code in a second. So the reviewing is even more important now than just the writing.
(00:37:21):
So I think if we start to tell folks that reviewing is still important, number one. Code review is still important, number two. And the fact that it takes a lot of different people, varying amounts of time, to get to a level of understanding for themselves on the code. To normalize that and to say, "Do not try to rush through the review."
(00:37:49):
One other thing that might cause this is, for example, incentivizing the wrong metrics for the team. How quickly you approve pull requests, or how many pull requests you are closing a day.
CW (00:38:01):
Yeah.
ABT (00:38:01):
If we have those sorts of metrics-
EW (00:38:05):
Write myself a new minivan. <laugh>
ABT (00:38:08):
If we have those kinds of metrics, those definitely play a part in why developers might rush through reviews. And again, as I think we all know, any metric that is used can be abused.
EW (00:38:21):
Gamed. Yeah.
ABT (00:38:21):
<laugh> But I think if we just try to normalize, and tell developers that thorough review means really taking some time to go through it and understand it. And that that is okay. That is number one.
(00:38:35):
But number two, also telling tech leads, managers, to encourage that. And to value a proper review, rather than just going through it quickly. To give their team the environment to do that. That is another part that needs to be there.
CW (00:38:54):
Do you think there is a threshold at which- I can imagine a section of code that has many lines. Speaking of metrics, lines of code is a terrible metric. But the complexity of one piece of code of a certain length, versus another piece of code of a certain length, it can be quite different.
EW (00:39:12):
Sure.
CW (00:39:12):
One might have an algorithm. One might have complicated mathematical things happening in it. One might be a string parser. Do you think there is a line at which we should not be doing a code review? This should be a design review?
ABT (00:39:28):
Yes. And I think that those should come much earlier in the process. If we think about what I think a normal workflow would be, usually you plan and you design what this feature or this fix might be, and you start to write that code. And then for most people, they do not have any other type of check in place or review in place. The next check or even final check or only check that might come, might be the code review process.
(00:40:00):
But yes, if it is something that is integral to the program, if it is something that is a bit more complex, I would argue that- Especially if it is something for which a lot of implementations might be available. It is certainly worth discussing, in something like a design review, to actually get to an agreement before the code review.
(00:40:25):
Because if you are only discussing and reviewing the design at the code review stage, it is too late.
EW (00:40:33):
Yeah.
ABT (00:40:34):
And if you decide, "Oh, there was a better way to implement it," or "Oh, there was an edge case that was missed," well, the code is already written and now what do you do? You have wasted all that time, if you do decide to go with the other implementation. Or the developer who wrote it is definitely going to feel upset, because they have spent all that time writing it, and now they have to start over.
(00:40:58):
All of those things should be caught much earlier. Something like design should be discussed much earlier. Way earlier than the code review.
EW (00:41:09):
Do you ever see a PR and realize, "Wait, I do not want to do a PR here with my GitHub system. What we need is actually to take a step back"? Either have the author go through the code, or do the design review we should have done. Where do you pull the brake lever and say, "The PR is not what we should be doing right now"?
ABT (00:41:40):
In an idealized world, we could pull it and do it over, but it is never an idealized situation. It depends on the urgency of the code that is needed. If this was something that we had planned, and we promised would be delivered at some sort of deadline, and we have reached that part and we are already there, it is a balancing act.
(00:42:09):
That is another balance that developers have to strike: "Okay, there is so much we can do to be proactive, and to make sure things are as good as they can be before they are out. But there is also the business value that we have to deliver, and the needs of the customers that have to be met."
(00:42:31):
You cannot spend all the time of your career crafting this one perfect solution, and never deploy it out.
EW (00:42:39):
<laugh>
ABT (00:42:39):
Because that is not likely what the business needs, nor what you were hired for.
(00:42:45):
But it depends on that type of urgency and what type of feature that is. If it is something that is super new- For example, that happened to me at one time. We promised that one new feature was going to really, really help a couple of customers. In the code review process, we did find that we missed a couple of edge cases.
(00:43:04):
So we really took the time to say, "Okay, if we spent one more sprint on this, this is going to be better in the long run. This is also not going to cause this potential edge case to happen, especially if we got a new client onboarded and they had to use this feature." Sometimes it is able to be delayed in that way. We make the change and we do that, address the edge case, and it is good to go.
(00:43:33):
Sometimes it is a little bit more clear and you are like, "Nope. This customer has been asking for this for three months. <laugh> They have been waiting for it. We promised it. The CEO's name is on here. We need to deploy no matter what." The value of having it out, even if it is a little bit buggy, outweighs the time delay that it would take to fix it and make it perfect.
(00:44:02):
That is, if it did require more than just a simple fix, more than a day's delay or something like that. If it did take a whole sprint or two to fix, we would choose to deploy right away, because the benefit outweighed the time delay to make it better, perfect, idealized developer code.
(00:44:26):
These are the choices that are going to differ team to team. Some will really be stringent about it and say, "No, this is going to cause more work for us in the future." But this is where teams need to align and say, "Where is that line for us? What matters for us? What are we okay with? If we do go through and deploy something out that might need work later on, do we have mechanisms in place to make sure that does not get lost? That it does not go to the backlog, and it does not become technical debt?"
(00:45:01):
Those are the other things that are, I guess, outside the scope of just the code review itself, that a team should be thinking about. And should have processes in place to address, so that these types of things do not occur. Meaning technical debt, or buggy code just left out there in the wild.
EW (00:45:23):
Something I feel is related: your book started with a story about a guy writing a feature quickly and then going off on vacation. His co-workers cannot figure out the code, or fix the bugs for the big important demo. And then the story ends. As I said, I am not finished with your book. Do we ever get back to the story?
ABT (00:45:43):
Ooh. See, these are the types of things that now I wish that it was not about to be published, because no, we do not get back to the story. <laugh>
EW (00:45:54):
<laugh> Okay.
ABT (00:45:54):
But I think the story was there to lead you into the book. To say, "If you do not want to be in this situation, continue reading the rest of my book, so that this never happens to you," is more of that type of, "Come join me on this journey in this book."
(00:46:12):
But that would have been actually a really cool thing to do, is to end and finish that story at the end. Maybe I can tell my publisher to add that part, and then we will get this book in 2026.
EW (00:46:25):
I found it hard, because it sounded like it was a short feature. Like the guy going on vacation whipped it out in less than a couple of days. So I was not sure how code reviews would have helped.
ABT (00:46:43):
I can see that point. Yes, it is something completely new. It was a larger feature and it was something that none of the team was familiar with. I think the point I wanted to make there was just, this has happened before to me, where-
EW (00:47:00):
It happens to everybody <laugh>.
ABT (00:47:01):
Yeah, yeah. There is just the rockstar engineer who decided to spit something out over the weekend, or over a couple days, in isolation. Just had it ready to go there, or merged it straight in, even worse.
(00:47:19):
While having a code review may not have caught everything, it certainly, in my experience and in the experience I am talking about, would have at least given us a way to stop that code from going through.
CW (00:47:37):
Mm-Hmm. Okay.
ABT (00:47:37):
And to have a lot of our team at least be aware, "Hey, there is this new chunk of code that is going to cause us some headaches and overtime. And at least give us a chance to take a look at it."
(00:47:51):
This comes from one of the main principles that I really believe in. That is the more eyes that are on code, the better. That story was ideally setting the stage for that in this book.
EW (00:48:07):
Was getting the book edited, like having your code reviewed?
ABT (00:48:17):
Absolutely. It was very close. In fact, when I talked to people who are unfamiliar with code reviews or software development, I talk about the editing process as a proxy, because it is pretty much the same thing.
(00:48:31):
It was like code reviews, because I spent a lot of time writing these chapters, writing these examples, adding anecdotes. It is still something that I have conjured up myself. So when it goes to reviewers and it goes to my editor, you see a bunch of lines here to make it more succinct. You see some comments there that says, "This does not flow." You see feedback that says, "This is unclear. Something is missing here."
(00:49:07):
It absolutely is like the code review process, because there are pieces of feedback that made my book a lot better. I know I talk about this in the book where I say, "Do not let code reviews affect you and who you are as a developer."
EW (00:49:25):
<laugh>
ABT (00:49:25):
When I write this book, I always say it is a little different. But it is still hard to see some of the pieces of feedback.
(00:49:35):
Now, granted, there were other pieces of feedback that were not as nice, just like in code review. Some folks were like, "Really? Do we really need to add this section here?" Or, "This is superfluous," and did not tell me why. So I had that same mindset of, "I am open to your feedback. But if you just give me this harsh piece of critique, without telling me why, I am less likely to listen to it."
(00:49:59):
There were other folks who had great feedback. They gave me a critique and they said, "I actually-" For one example was, "I was reading this chapter and I was reading this particular section. This to me felt like it would be better in chapter three. That would fill the gap that was in chapter three, that I was expecting to read, after reading the intro of chapter three."
(00:50:28):
So it was a very in-depth comment and explanation of why they were giving me this feedback. If I had just gotten something that says, "I think this would be better in chapter three," "Okay. Okay, bud. Sure." That is how I would accept that type of feedback without any rationale, without any explanation. It sounds a bit more subjective.
(00:50:51):
It is the same in code reviews. If you do not give an explanation, or at least come from an objective place when you give your feedback, then it is less likely to be accepted by the author.
(00:51:05):
So absolutely. Editing was very much like code reviews.
EW (00:51:11):
What made you write a book? Write this book?
ABT (00:51:16):
I think I again got lucky. I get lucky a lot. Manning reached out to me. I did not even set out to write anything. It is not like I pitched this. Somebody just reached out to me and said, "Hey, we have seen some of your conference talks, and some of your work. We wanted to know, if you were to write a book with Manning, is there any topic you have that you think you could write a few hundred pages about?"
(00:51:47):
And I am like, "Yes! Code reviews." The opportunity came up. Now I am actually very thankful and very happy that it did, because I hope this book becomes the code review book. I have spent a lot of time on it. It is a labor of love. I really just want it to improve code reviews everywhere. So, yeah.
EW (00:52:15):
One of the things that I liked about the book, was the areas that felt a little bit like a workbook. The Team Working Agreement in the appendix, where it asks you to list the prioritized goals for the review process, because not everybody is in it just to find bugs. Sometimes it is more about the mentorship, or the learning transfer.
(00:52:38):
But also the list of responsibilities for the author, and for the reviewers. It was just nice to have checkboxes I could check off. I wanted to be able to do more on that side. Basically I want to turn your book into a workbook. Which is not a useful question at all. What do you think of that? There we go.
ABT (00:53:10):
No, you are on the right track. As I would receive feedback from my reviewers- I had three major review cycles. One was after the first four chapters were written, one was at the halfway point, and one was at the end of the final manuscript. I received similar feedback, where they said they liked the interaction. They liked the posing of questions.
(00:53:36):
And they liked the parts that they could take back to their team, and either start a discussion with or fill out with their team. Before that feedback, there were not many opportunities to fill things out yourself, or do things in a more interactive way.
(00:53:57):
After I got that feedback, that is why I have the PR template examples there. That is why I have the emergency playbook starter. All of these things I did not even think of. I was in the mindset of just, "Here is what I need to share with folks." But I never thought about actually giving them something to take and go forth after being inspired by the book, let us say.
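[Editor's note: as a rough illustration of the kind of PR template Adrienne describes, a pull request description template might look like the following. The specific section headings are assumptions for illustration, not taken from the book.]

```markdown
<!-- Hypothetical pull request template: prompts the author to capture
     the context and nuance that reviewers need. -->
## Summary
What does this change do, and why?

## Context
Link to the ticket or design discussion. What alternatives were considered?

## Testing done
How was this verified? Which edge cases were checked?

## Notes for reviewers
Anything unfamiliar, risky, or worth extra attention?
```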
(00:54:25):
That was one part, which certainly improved the book a lot. I am also now thinking about how to actually turn this into a workshop, because I also present the best parts of this book in a conference talk.
(00:54:39):
A lot of developers come to me and say, "Do you think this is something that you could talk to my team about?" Or, "How would you orchestrate starting this conversation, and getting us to that foundational point with our code review?"
(00:55:00):
One of the parts that I think would make a great workshop, would actually be the formation of the Team Working Agreement. And part of chapter three, where you talk about and discuss your code review process.
(00:55:12):
So one part is actually building it, if you do not have any at all. The other one, which I think would apply to most teams, is where you have a process in place, but there are lots of gaps, or people do not know all the steps, or people are not happy with it. So in chapter three, I talk about how to go through it, how to find those gaps or find the weaknesses in your process, and then start to address those things and change your workflow.
(00:55:42):
Those are two ideas I have had for workshops. I have not pitched them yet. They are very much in the high level stages. But you are on the right track and the appetite is there. So you saying that is certainly making me feel like I need to think about that more, to start making some workshops out of those particular portions.
EW (00:56:06):
One of the lists was the list of reviewer responsibilities. Before we started this show, you asked if things were different in the embedded world. Somewhat, but not as much as you would think, really. But your list included compiling and testing the code.
CW (00:56:27):
Oh.
EW (00:56:30):
<laugh> Christopher is like, "Oh, I know where she is going now." That part is not always easy, especially if you are doing a bug fix on say, a smartwatch that only happens at midnight or whatever nonsense.
(00:56:41):
I was actually a little surprised to see compiling and testing as part of the reviewer responsibilities. I totally expected to see them as part of the author responsibilities. Do you always do that? I really do not, unless it is requested, or I see something I do not understand. Or I believe there is a bug and I want to test it myself, so that I can make good review comments.
ABT (00:57:05):
Not always, but certainly when the process warrants it. This came about because there was a different team I worked on. It was a smaller team, and we did not have a lot of automations in place.
(00:57:19):
I talk about how the idealized state is, yes, the code review process is one part of it, but it is also a complementary part of a larger continuous integration pipeline. A lot of the checks that we would have would be run in an automated way, in the later parts of the pipeline.
(00:57:44):
But because a lot of folks use GitHub or GitLab, sometimes those checks do start happening closer to the code review process. Sometimes you can actually run pre-build checks, or you can run some checks maybe at the same time as when the pull request is opened.
(00:58:07):
But to go back to my small team, we did not have any of that. We did not have any automations in place. We did not have any of the things that make our lives easier as reviewers. So there were a lot more responsibilities for us, to make sure that what we were reviewing would pass, and be okay to merge into the rest of the codebase.
(00:58:31):
So part of that, on our team, was for us to actually pull down those versions, and test it ourselves in our environments, and make sure that everything was okay. That was instead of having, say, a staging environment. Or even some folks have an automated way to get that code, run it, compile it, build it, deploy it to a test environment, for it to be checked and do a report and make sure those things are working. Or even have some automated tests run against that, and just get a nice report to say, "Hey, yeah. It is good. Everything is good."
(00:59:12):
We needed to do those parts ourselves. We needed to take down that code, compile it ourselves, and do those manual checks ourselves. That just came out of necessity.
(00:59:26):
I added that to the reviewer's list, because there are a lot of teams that I have spoken to, that are small teams that are not at the idealized state at all. Talking about using a code review process, or even going through the PR process through GitHub, was still something brand new to them. That was still something that was shiny and just being integrated into their process, and it was a start, a stepping stone, to making their process better.
(01:00:01):
So to add something like that, to pull it down yourself and test the code yourself, was to me not something too weird to add into the reviewer process. Hopefully I have described in that section that it is not always necessary.
(01:00:19):
But the more important thing is that our responsibility as reviewers, is to make sure we have done a thorough review. If a thorough review means pulling down the code, compiling it and running it ourselves, then that may be something that just has to be done, on that specific team.
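[Editor's note: the automated alternative Adrienne describes, where the code is built and tested as soon as a pull request is opened, could be sketched as a minimal GitHub Actions workflow like the one below. This is an assumed setup, not from the episode; the `make build` and `make test` targets are placeholders for whatever a given team's build actually uses.]

```yaml
# Hypothetical CI workflow: on every pull request, build the code and run
# the tests automatically, so reviewers get a report instead of pulling
# the branch down and checking it by hand.
name: pr-checks
on: [pull_request]

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the PR branch
      - run: make build             # placeholder build step
      - run: make test              # placeholder test step
```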
EW (01:00:38):
This is part of the Team Working Agreement: whether or not this is something you do. Whether or not this is something you do for all bugs, or all bugs that are not typos. You get to choose, and you get to agree, on what the goals are and what the process is.
ABT (01:00:57):
Mm-hmm.
EW (01:00:57):
I am trying to put in a code review process, for a new team that I am working with. They are just familiar with design reviews, so they want the process. But it does have some downsides. It takes more time. I bill hourly, so time is expensive to them, and I do not blame them for wanting to maximize my output and maybe undervalue the review.
(01:01:36):
But the way that I am starting, is to break up your <laugh> Team Working Agreement, and then get them to agree on each part individually. And once we are all done, I will present it to them as finished! By the way, if you are listening, this is totally not manipulation.
ABT (01:01:53):
<laugh>
EW (01:01:53):
But the first thing I was going to do, was go through the prioritized goal for the review process. You list some possible goals. Finding bugs, codebase stability and maintainability, knowledge transfer and sharing, mentoring, and record-keeping and chronicling.
(01:02:15):
I did an informal poll on the Embedded Patreon Slack. Finding bugs was the second most important. Codebase stability and maintainability I think was the most important. And record-keeping and chronicling was the least. Is that how it goes for most teams?
ABT (01:02:36):
Yes. That is part of why I added it as a distinct goal. Your poll is very, very close to my own experience as well. It is almost the obvious goals to have, and why you are doing a code review in the first place. So that is great. That makes me happy to hear that those are the top two goals.
(01:03:02):
Record-keeping and chronicling, I can understand why it is lower on the list. Some folks have noted that they have other systems in place, to have that type of chronological record. For example, maybe their ticket systems, or their project management systems, or some other external tool.
(01:03:26):
Where all the additional detail, and context, and all the things that we would want to know, and what I am advocating you add into the pull requests, they keep that somewhere else. That is possibly why they abstract that away from the code review process itself, and think that it is not a goal.
(01:03:49):
Another reason is a lot of folks who may not do reviews this way, maybe they do reviews in say pair programming or mob programming. They feel like they get that much faster review cycle there, and the goal for the record-keeping is not necessary, because they have already shared that knowledge across the members. They have shared it either with a pair or the whole team or the mob, and that it is unnecessary.
(01:04:22):
That is where I say it is actually quite necessary. Because that is often cited as an argument to say, "Well, we do not even need code reviews at all. We already do way better, way faster, way more in-depth reviews during pairing or mob programming. Then why would we even need to do a formal process at all?"
(01:04:45):
I agree that you may not need as thorough of a process, as if you were not doing pair programming or mob programming. But the record-keeping part is actually the main reason why I think you should still do it. I argue that if you do engage in pair programming or mob programming, that you can just do a more shortened version of a code review.
(01:05:13):
It can be now just a- I do not want to say, "Rubber stamp," because now people are going to misinterpret and be like, "Oh, Adrienne said you could just rubber stamp a pull request." But what I mean is the process itself may not have to be as thorough and as long, as if you are not doing pair or mob programming.
(01:05:32):
Having a type of record, especially if you do not keep any of that context or decision-making anywhere else- I would argue then a code review is still absolutely necessary. And it is probably the only place where you would keep all of that decision-making and context that was discussed in pair or mob programming.
EW (01:05:54):
I have done pair programming. With the right pair, I adore it. Mob programming sounds interesting, but I think there are enough voices in my head that I do not really want that. I have to think of all of the times that I write code and then I go away, and then the next morning I clean up and fix it. You could not do that as a mob. I think you would need more time.
(01:06:23):
So I guess I am curious about mob programming. But that is an entirely separate show, and we are nearly out of time.
ABT (01:06:31):
Yes, yes. There is so much more. We could talk about that-
EW (01:06:34):
Pastries.
ABT (01:06:36):
Yes! Let us do that. Let us do that as a second episode.
CW (01:06:38):
Pastry reviews.
ABT (01:06:40):
Third episode.
EW (01:06:40):
Pastry reviews.
CW (01:06:42):
We will all buy a set of pastries, and then review them. <laugh>
ABT (01:06:48):
See, that is an award-winning show, right there.
EW (01:06:51):
And instead of robot or not, we will have pastry or not. Is a donut a pastry?
CW (01:06:55):
Dessert or not?
ABT (01:06:57):
Oh, geez. Yes. See, this is where- When you first asked me, "What is your favorite pastry?" my mind immediately went to something that people consider a dessert and I am like, "Well-"
EW (01:07:07):
What was it?
ABT (01:07:07):
"Pastries can-" So my favorite type is- It is like this- I do not even know what you call it. I had it in one restaurant. I do not remember the name. It is like a very soft cake-like- Warm vanilla cake-like thing, with some vanilla ice cream on top.
CW (01:07:27):
Sounds almost like bread pudding, but with ice cream.
EW (01:07:29):
Sounds like cake!
CW (01:07:30):
Oh, yeah. <laugh>
ABT (01:07:30):
It is not as mushy as bread pudding. It is not just as straightforward as cake. The hot component with the cold component is very important. That is a big part of why I like it. But I do not know. I need to find that place again. The consistency of it was very hard to describe, because it is like right there, right in the middle.
CW (01:07:54):
Hmm.
EW (01:07:56):
Well, let us know when you remember where you found it. I am always in for a good dessert. But I do not think that is a pastry. <laugh>
ABT (01:08:04):
Yes, I have gone through this conversation. That is why I am like, "Well. No. Most people will not consider that a pastry."
CW (01:08:11):
Is birthday cake ice cream a cake? Hmm.
EW (01:08:11):
<laugh>
ABT (01:08:14):
Is a hot dog a sandwich?
EW (01:08:15):
<sigh>
CW (01:08:18):
It is, if you slice the bun at the bottom.
ABT (01:08:22):
<laugh> Well, there you go. There we go.
EW (01:08:25):
Adrienne, do you have any thoughts you would like to leave us with?
ABT (01:08:30):
Yes. Just the fact that you are listening about code reviews is a really, really good start. More than that, I think if everybody cared, not just in code reviews, not just in software development, but in everything. If we all just cared a little bit more about the things that we do, I think we would all benefit from it. So take the time to care just a little bit more than usual.
EW (01:08:57):
Our guest has been Adrienne Braganza Tacke, author of "Looks Good to Me: Constructive Code Reviews." It is out in electronic copies. You can find it at manning.com or you can get a paper copy from Amazon in a few months.
(01:09:12):
We do have a few to give away. And as it is a Memfault supported show, email us and tell us what you like about Memfault. Do that by November 15th, and we will send-
CW (01:09:31):
Code.
EW (01:09:31):
Three randomly chosen people, a code for a book.
CW (01:09:35):
Digital book.
EW (01:09:36):
Digital book, digital book.
CW (01:09:39):
Thanks Adrienne.
ABT (01:09:39):
Lovely. Thanks.
EW (01:09:39):
Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for their questions. I did not call you by name, but I did try to use your questions this week. Thank you to Memfault for sponsoring the show. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.
(01:09:59):
Now a quote to leave you with, from President Gloria Macapagal Arroyo: "The power of one, if fearless and focused, is formidable, but the power of many working together is better."