462: Spontaneously High Performing

Transcript from 462: Spontaneously High Performing with Marian Petre, Christopher White, and Elecia White.

EW (00:00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Marian Petre. We are going to talk about how you go from being an expert, to being more of an expert. Or maybe how to go from being a novice, to being an expert. That might be more useful to more people.

CW (00:00:25):

Hi Marian. Welcome.

MP (00:00:26):

Hi. Thank you for inviting me.

EW (00:00:29):

Could you tell us about yourself, as if we met at a Strange Loop conference?

MP (00:00:36):

Okay. Hi, my name is Marian. I am from the Open University. I pick the brains of experts, to try to figure out what makes them expert.

EW (00:00:47):

That is a good elevator speech.

CW (00:00:50):

Do the brains get to stay in the experts?

MP (00:00:55):

<laugh> Yes. Sadly, the brain picking is actually quite indirect.

CW (00:00:59):

All right.

EW (00:01:00):

I like how she prefaced that with, "Sadly."

CW (00:01:02):

<laugh>

EW (00:01:02):

We are going to do lightning round, where we ask you short questions, and we want short answers. And if we are behaving ourselves, we will not ask why and how, and are you sure? At least until we are done with lightning round. Are you ready?

MP (00:01:17):

I am ready.

CW (00:01:18):

What do you think is the best first programming language to learn?

MP (00:01:22):

Any of them. One that does not put you off. There are languages that are designed for people to walk in, without having to get crazy. So one is Scratch by Mitchel Resnick and others, but I think our entry level courses are in Python. I learned BASIC, and then I learned machine language. The answer is, as long as your first language is not your last language-

CW (00:01:53):

I like that.

MP (00:01:54):

It does not necessarily do permanent damage.

CW (00:01:55):

<laugh>

EW (00:01:58):

Which is the best programming language overall?

CW (00:02:02):

Oh, come on!

EW (00:02:03):

I know. That is such a terrible question.

MP (00:02:05):

I decline to answer that one.

EW (00:02:07):

<laugh>

MP (00:02:07):

I will give you an alternative, however, which is that empirically, one of the single best indicators of programming performance, is how many programming languages the developer has touched.

CW (00:02:18):

Huh!

MP (00:02:18):

And so the key is not one language, but many languages. And to learn lessons from all of them.

EW (00:02:26):

And finally, I am glad I learned AWK.

CW (00:02:28):

I have touched many, many programming languages. And the lesson I have learned is, I do not like programming. No!

MP (00:02:33):

<laugh>

CW (00:02:33):

<laugh> Ah, next question. Let us see. Do you like to complete one project, or start a dozen?

MP (00:02:44):

Yes. Oh. I think the answer is both in balance. I try to be sort of focused, active on two projects at a time, so that I can play between them. And then keep a backlog of all the other stuff that is in my head, so I do not lose it.

EW (00:03:02):

Are there any journals or magazines or YouTube channels or conferences, that normal everyday software developers should follow to understand best practices?

MP (00:03:13):

I do not have a good answer to that. The two industry conferences I have been to recently, that have been an absolute joy from a developer's perspective, have been Strange Loop, which sadly has just had its last episode, and Joy of Coding in the Netherlands. I would recommend both of them.

CW (00:03:35):

What is your favorite fictional robot?

MP (00:03:40):

That is hard. Robby the Robot from "Forbidden Planet" was a sort of meme from childhood. We used to call him Blobby Robby.

CW (00:03:48):

<laugh>

MP (00:03:48):

More recently, MEAD, a sort of robot-spaceship cross, was entertaining, partly because there is an implied ethos in MEAD. But yeah, I think Robby is kind of the iconic robot image that everybody has.

CW (00:04:16):

What was MEAD from?

MP (00:04:18):

It is a film.

CW (00:04:19):

Oh, okay. I will have to go look for that. I have not seen that one.

EW (00:04:23):

A robot film we have not seen!

CW (00:04:24):

Yeah.

EW (00:04:25):

Cool. Do you have a tip everyone should know?

MP (00:04:29):

I have one from my very wise mother, "Never make a statement, when you could ask a question". It is a piece of advice that has stood me in good stead over 30 years. Well, more than 30 years.

EW (00:04:42):

I am kind of surprised that was not in the form of a question <laugh>.

MP (00:04:47):

<laugh> I know. I know. There is a certain irony to that.

CW (00:04:51):

It is the Jeopardy rule of life.

EW (00:04:57):

<laugh> I saw your Strange Loop talk on Greg Wilson's "Never Work in Theory" site. This was the small version, although I have seen the larger one now. It was about how experts think about errors. Could you tell us a little bit about that?

MP (00:05:13):

I am not quite sure what you want to know about it. That talk is one small slice out of decades of research, on what makes experts expert. Greg Wilson and Mike Hoye's image for those talks was a ten minute talk, that would deliver something actionable from research to developers. For me, the attitude to error thing was a really nice nugget to hand across that boundary.

(00:05:47):

It is also incredibly rich. The whole notion is that experts have a very different approach to error when it arises, than say people in software factories. So instead of, "Oh my God, there is a bug. Swat it. Get rid of it." They pause and they look at the error. And they say, "What is that about? Is that as trivial as it seems? Or is it part of an ecosystem, a collection of other things? Is there something else going on, that we have not thought about?"

(00:06:15):

Very often really important insights about the software, come from paying attention to errors. And in a way that fixes the error, not fixes the blame. So it is a very, very open attitude. That embracing error as opportunity, is a really, really useful part of that expert mindset.

EW (00:06:42):

I like that a lot. Figuring out how to help people become experts, is something I have been thinking a lot about lately. How do you take people who are excited and willing to do more and to take classes, and help them get over the hurdle of not even beginner to novice, or novice to junior. But junior to engineer, and engineer to senior. How do you help them become experts?

MP (00:07:20):

Well, I will relate things that I have seen, in terms of the way the high performing teams bring people on side. Actually, first I will tell a story about one of the designers that we studied. I was doing work with my colleague Andre Van Der Hoek at University of California, Irvine.

(00:07:36):

As part of that, we recorded- Or he and one of his PhD students at the time, recorded pairs of designers working together on a design task. In all of the companies they went into for these recordings, they asked for people who were really their best designers, so that we could get sample material for people, for researchers, to look at, to try to understand what was in those dialogues.

(00:08:01):

In one of the cases, one of the designers was incredibly young. He was not the sort of person that you would expect them to have delivered to us as their really high performing designer. And so they stopped afterward and spoke to him and said, "How did you get to be here?" And his whole story was a story of asking questions.

(00:08:26):

Every time there was something to do, he would pick the problem he did not know how to solve. He would find something he had not done before. He would navigate the design space, the problem space, in a different way, because he wanted to be surprised. So every project he worked on, he focused on whatever the design component was that was most crucial. He was trying to sort out the shape of the solution, before he started engaging with coding. He made lots of mistakes and he learned from the mistakes. He sought out open source projects that were in areas he was not familiar with.

(00:09:09):

What he did was he gave himself a huge range of experience, in order to stretch himself and in order to give him a body of material. Not just to engage with and build his knowledge base, although that was certainly part of it, but also to reflect on. So that he could look across the whole of that, and begin to understand what works, what does not, and why. I think that is a big part of it. Is that business of looking for diverse experience, reflecting on it. Thinking hard about what makes something better, what makes one solution better than another solution, or what the trade-offs are.

(00:09:47):

In terms of you helping people. I have always said that- It is kind of a meme, but the education is in the dialogues. There are points of engagement that are really, really important, where somebody is coming to terms with something. They just need to talk about it to somebody else, need to get it out of their head, need to compare their experience to somebody else's experience.

(00:10:11):

So creating an environment in which it is perfectly reasonable to explore, in which it is valued to learn and experiment, and make mistakes and then figure out how to fix them, and in which it is reasonable to have conversations about that, is a rich environment for developing expertise.

CW (00:10:34):

I want to go back to what you were saying about this particular person, who explored different ways of looking at things. Explored different- Walked their way through not being closed into one thing, exploring different things. And what you said about experts, and how you found that experts tend to see bugs or errors as a problem- Not as a problem, as an opportunity.

(00:11:04):

It is kind of a paradox because that sounds like- Forgive me, in Zen Buddhism, there is a thing called "beginner's mind," and it is a thing people talk about. It sounds like maintaining beginner's mind, which is somewhat paradoxical if you say, "Oh, if you are a good expert, you can get into the beginner's mindset."

(00:11:20):

But it sounds like that is sort of what you are talking about, being able to approach things without a lot of judgment to start with and see, "Okay, where does this lead me? What is this telling me? That maybe my years of experience are locking me into a solution. That maybe I am missing something."

MP (00:11:40):

That is a beautiful recap. One of the things that is really interesting that experts do, particularly when they are in a sticky problem and they are struggling to handle all the different constraints, is they relax constraints. Either they simplify the problem, by taking away some of the design features they expect to put in eventually, and they focus on really the heart of the problem. Or they just say things.

(00:12:04):

I literally have sat in a design session, where as part of the discussion, the expert in the room said, "Let us pretend there is no gravity. How does that change what we would do?" It is clear that we have not figured out how to eliminate gravity. But by reducing that constraint, they really broadened the potential solution space, and they got insight into the thing that was hanging them up.

(00:12:30):

That whole sense of the inquiring mind, that whole business of continually re-exploring the nature of the problem, and the understanding of what is needed, is part of that designer mindset that distinguishes these high performers.

EW (00:12:49):

How do you decide who is a high performer?

MP (00:12:54):

<laugh> Well, the algorithm I used for finding experts when I went into organizations to study them, was I would ask everybody who the expert was, and go to that person. That person always pointed to someone else. No one admits- None of these people admits to being an expert, because they are all too aware of their limitations.

(00:13:17):

But the reality is that there is very often one person or a couple of people, who are the people who sit quietly in the room. And then when they open their mouths to ask a question, that question changes the discussion. They are very often people with very deep knowledge, and knowledge that is- I keep talking about "garbage can memories," experts with garbage can memories that are indexed.

(00:13:44):

They can go back in time and understand what they did on previous similar projects, what the constraints were on those projects, how they made the decisions, and then they can reapply them. But they are also the people who are able to see opportunities, to see past methods, to see past imposed constraints, to see past current technological obstacles, to find alternatives.

EW (00:14:12):

One of the things that I found fascinating after graduating in college, was the emergence of the design patterns. Zeitgeist? Gestalt? I do not know. Whatever. Some word I do not really know.

CW (00:14:30):

<laugh> Just going to go through a bunch of German words. <laugh>

EW (00:14:31):

Yeah, elevator. I do not know. It felt like that made people more expert, because they got a wider variety of problems and relatively canned solutions and understanding of where things fit.

MP (00:14:49):

Yep.

EW (00:14:49):

Do people just need to read a couple books to become an expert?

MP (00:14:55):

No. No, no, no, no.

CW (00:14:58):

<laugh>

MP (00:14:58):

I mean, expertise is something that takes time to acquire. It takes time, it takes experience, it takes reflection. The point is that people can step onto the path toward expertise, by adopting an appropriate mindset. And then over time, build up that knowledge base, and build up that experience base, and build up that body of reflection.

(00:15:19):

So the nice thing about patterns, as you say, is that it was encapsulating known solutions to familiar problems, in ways that could be reapplied. It abstracted them. But ideally, if a pattern is well expressed, it also gives examples of how it works, where to look for how it is applied. And that is really, really powerful, as long as that does not become the end instead of the means.

(00:15:45):

Patterns as a tool are incredibly powerful. They do allow people to walk through things they might not have thought about themselves, and to consider alternatives that they might not have generated.

EW (00:16:00):

One of the things that I try to convince people to do, is to stop coding and to start thinking.

CW (00:16:07):

I try to convince people to stop coding. Oh, no. You mean not permanently.

EW (00:16:10):

Yeah, your version is never code again.

CW (00:16:12):

Oh, sorry. <laugh>

EW (00:16:13):

My version is think first. And I go through times where I am like, "Okay, write out pseudocode. Okay, write out pictures." But it is all really just, "Do not type until you have thought about it." You brought it up with your expert. How do you...

CW (00:16:27):

There is so much coding without design that happens. I think is what you are saying, right?

EW (00:16:33):

How do we convince people to stop typing <laugh>?

MP (00:16:37):

Well, part of it is cultural. There are huge differences in software development cultures, that actually have a real impact on how people behave. So in going to places like Strange Loop and Joy of Coding, I met all these developers who are reflective practitioners, who are clearly out there trying to learn things, trying to think about things, who were open to conversations.

(00:17:00):

Going into companies is not necessarily the same thing, because in a lot of companies, they are driven by KPIs, key performance indicators. They are driven by how many pull requests you submit, the metrics drive things. That is actually a culture that is really problematic for developing generalist expertise, for developing problem solving expertise, and the design mindset.

(00:17:35):

Because the design mindset is not about those quick fixes, it is about understanding that investment in the underlying issues pays off, in terms of speed and quality of delivery overall. So it may look like a much bigger upfront investment, but when I talk about high performing teams, I am talking about teams that deliver basically on time, under budget, and whose software works the first time. And they do that repeatedly. Part of that has to do with understanding and owning that development process, in a way that is not driven by management indicators, but is actually driven by the engineering needs.

(00:18:26):

So I am very, very sympathetic to the position you are in. I mean, you saying, "Think first." Yes, absolutely. It is very interesting to watch what these high performers do. They certainly think first. They certainly sketch things. They also sit in a corner with a pad of paper and their legs crossed, and wave a pen in the air without writing anything down a lot of the time. But they think hard before they start committing things to code.

(00:18:57):

That does not mean they do not ever sketch, they do not ever make annotations in code. But what these people do is they design solutions. Then they begin to implement those solutions in a pseudocode, that is an amalgam of lots of different notations and lots of different ways of presenting things. When they have got the shape of things worked out, then they move on to code, because that part is easy. That part is pretty straightforward.

(00:19:31):

Sometimes they will just type code, because it is faster. Because they know what they are- I suppose we should be distinguishing between normal solutions and radical solutions, as the literature would have it. There are certain things that are just a very familiar problem. This is another edition of what we already know how to do. We will just do it. We will use a good solution for a known problem.

EW (00:19:51):

Oh, yeah. I am not going to make a diagram for string copy. I know how to write that code.
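For reference, the kind of known solution Elecia means here is small enough to carry in your head. A minimal C sketch might look like the following; it is an illustration, not production code, and real projects would usually reach for a bounded variant such as strncpy or strlcpy.

    /* Copy the NUL-terminated string src into dst.
     * The caller must guarantee dst has room for the string plus its NUL. */
    char *string_copy(char *dst, const char *src)
    {
        char *start = dst;
        while ((*dst++ = *src++) != '\0') {
            /* copies byte by byte, including the terminating NUL */
        }
        return start;
    }

This is exactly the sort of familiar problem with a familiar shape that needs no diagram: the whole design fits in the loop condition.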

MP (00:19:56):

That is right, and you just go to that. But for things that are new, they think first, as you say. The ways they articulate- I did some studies about representation for ideas capture. I did things like I wandered around after people. I pulled the envelopes out of their bins that they had been sketching things on. I took pictures of their whiteboards. I kept track of the things that they were writing and drawing, when they were shaping the solution in their dialogues and in their own minds.

(00:20:30):

Those were incredibly diverse representations. They had lots of different things in them. There were little bits of code, but there were also diagrams. There were also bits of analysis. There were also descriptions of things. There were- The code that they wrote might have been in more than one language, because they were borrowing from known elements here or there. And that is pretty typical.

CW (00:20:57):

Do you think this has changed over time? As you described this, I am thinking back to my early experiences as a software developer in the late nineties, early two thousands, where I think I was surrounded by people like this. This is how we did things.

(00:21:09):

There were discussions and we spent a lot of time- I remember when I got assignments to do things, I would spend a month writing documents and stuff on how I was going to approach it, before I wrote any code. And that was encouraged.

(00:21:23):

I do not feel, at least in my recent experiences, that that is the way things are working most of the time.

EW (00:21:33):

You were hired into a company and surrounded by fantastic experts.

CW (00:21:36):

I know. But I went to different companies after that, and that...

EW (00:21:40):

It all went downhill, did it not? <laugh>

CW (00:21:41):

No, no. Well, yeah. <sigh>

MP (00:21:47):

Over the time that I have been studying developers, the scale and nature of the software they have been building has changed.

CW (00:21:54):

Yeah.

MP (00:21:56):

A lot of what is going on is people are no longer just building a piece of greenfield software. They are building something into a product line, or indeed designing a product line. Or they are borrowing from all sorts of existing software and libraries. They are compiling things, they are composing things. So in some ways, parts of what people are doing has changed in proportion, if not in kind.

(00:22:19):

But in terms of the problem solving, there is a real need to go back and think about the concepts, to think about the nature of the design, to focus on the essence before getting bogged down in the detail.

(00:22:38):

One of the things I do think- In some places I have studied, where they have shifted into some variation of agile practices, they do not always make the design documentation, the functionality documentation, as visible and as prominent as it would have been in traditional teams.

(00:23:04):

They are using a lot of the same methods. I do not actually think there is a disjunction between traditional software development and agile development. I think Agile just highlighted certain effective practices, so that people could adopt them in a coherent way.

(00:23:22):

But there are some interesting questions about where some of those diagrams, sketches, early notes go in that process. They do not necessarily show up on the wall or on the whiteboard. It may be that all of this is still happening, but it is not as publicly visible. Public, I mean within the team. It is not visible to the whole team in the way that it might have been.

(00:23:50):

I also think that the dispersal of developers has had an impact. One of the-

CW (00:24:00):

Geographically or?

MP (00:24:01):

Yes. Physical dispersion. So in one of the places I studied for a long time, each developer had an office, but the top half of the walls were glass. And they would use the glass walls as whiteboards.

EW (00:24:18):

Huh.

MP (00:24:20):

So even though they were developing individually, they were working on their component individually, when they sketched something on the whiteboard, people could look across and see what they were doing. I saw a number of dialogues that happened, simply because somebody came running down the corridor, knocked on the door and said, "I just saw this. Hang on a minute." Or indeed just stood in his or her own office, and drew an alternative. <laugh> And they had a kind of dialogue through the windows.

(00:24:46):

I think it is harder now or it is less spontaneous now, if people are working in separate offices, and they have not found an alternative, a replacement for that kind of impromptu interaction to do that kind of explicit sharing.

EW (00:25:07):

So open offices versus cubicles versus closed door offices. Is there research that says one is better than the other? There is only one right answer here, by the way.

MP (00:25:26):

<laugh> I do not have an answer to that.

EW (00:25:28):

Fair.

CW (00:25:29):

I do believe that that research exists. I do not have it.

MP (00:25:31):

I am sure the research exists.

CW (00:25:31):

I do not have it in front of me, but somebody I know cites it religiously at management, every time they try to institute open offices.

EW (00:25:39):

I know. But then I always wonder about the quality of the studies and all of that. How do you make a good study, that looks at software development? Do you give fake problems? Do you integrate into a team for five years? How do you study these things?

CW (00:25:54):

Yeah.

MP (00:25:55):

Okay, there is not a single, "Here is how you do a good study" answer. <laugh> There are lots of ways to do studies. A lot of the work that I have done, for example, has been me going to teams, and sitting with them and watching what they do. Or alternatively interviewing developers, and then they show me what they do, where they give me, send me, examples of their materials.

(00:26:18):

We also do experiments where we have much more focused questions, and we ask developers to do designated tasks, and then compare across different places. The key to all of it is that you- There is an awful lot- The way that you design the study, depends on the question that you want to answer. So there is not a single- The way that you marry research-

(00:26:44):

Okay. So I do a lot of work with PhD students, teaching them the craft skills of research. One of the fundamentals we have is about research design. We call it the one, two, three of research design. Number one, what question, what is your question? And why does it matter? And what does an answer look like? That is number one. What is the question?

(00:27:06):

Number two, what evidence would satisfy you, in answering that question? And then number three is, what technique would deliver that evidence? So how to design a good study is, figure out what you want to know, and what knowing would look like.

(00:27:23):

So if I want to understand where innovation comes from in development teams, I will probably start by looking at an innovative team, and watching for any instances of innovation. I will probably then take that information, and go talk to all of those developers. Ask them questions about how they perceive innovation, and how they see it arising. And whether the things I have identified, are the things that they would identify.

(00:27:54):

Based on that, I might then go to a number of other companies. I would be looking for things like differently organized companies, differently organized teams, different domains. Because what I am trying to get is a representative sample. But working at that intensity, limits the number of places that you can go to study. So I am not going to be talking about statistically significant results, if I observe five teams in five different companies.

(00:28:26):

So the real answer is that we use a collection of different methods over time, that look at the question from different angles, and using different evidence. Then we reflect across all of that evidence, to try to understand the underlying phenomenon. Once we have understood it well enough, we can articulate what we think is going on. Then we can go test whether that is true, whether we can find anything that contradicts that theory of how things work.

(00:28:58):

But it is not simple. There is a lot of ethnographically informed work, where people are sitting in companies watching things as they happen naturally. There are what we call field studies, where there might be some intervention. Where, for example, we might ask people to do a particular-

(00:29:16):

Well, the example that I gave, where we were looking at pairs of developers solving a particular design brief at the whiteboard. We went and filmed them in their companies, but it was our design brief. We could look at all those pairs, we could look across all the pairs, to see how their behavior was similar and how it differed. But arguably, I think we had about ten videos at the end. That is a very small number to be representative of the whole of software development, or of the range of design styles that are out in industry.

(00:29:55):

It is not necessarily simple. There are a lot of people doing survey work, or doing tasks on things like Mechanical Turk. For that, you need a very, very specific question. You need a pretty good idea of what it is that you are asking about, or you end up with a lot of examples of very weak evidence. And so on, and so on.

(00:30:19):

There are lots of different ways to do it. It actually requires thinking over time. But it also depends what kind of answer you need. Sometimes you just want a finger in the air kind of answer. Is there any reason to think there is a difference between these two methods? Let us have a quick look at them. Or does this work at all? A really simple demonstration of concept might be a very, very limited kind of study. So it comes down to that match between what you want to know, and then how you choose to find it out.

EW (00:30:59):

These seem like psychology design studies, as opposed to computer science where you are looking at data.

MP (00:31:07):

Okay, so fundamentally-

EW (00:31:09):

Sorry, that came out really badly, did it not? Where you are looking at data, like that was not data. Where you are looking at numeric data.

MP (00:31:16):

But the thing that you are asking about, if I am talking about the nature of expertise, I am talking about human behavior. One of the reasons that computing is such an interesting domain in which to do research, is because software is limited basically by our imaginations. Whatever we can imagine, we can probably build over time.

(00:31:38):

But the key is the human imagination, is the human ability to effect the designs that are in their minds. And so for me, anything that we do, the software that we use is going to be written by people. Or maybe a collaboration between people and machines.

CW (00:32:06):

Let us just go with people.

EW (00:32:07):

<laugh>

MP (00:32:08):

It is going to be read by people-

EW (00:32:11):

And machines.

MP (00:32:11):

Importantly, and then operated by machines. Ultimately it is going to operate in a human sociotechnical world. There are lots of systems that are very much oriented to technology, but even in the ones where the human part seems irrelevant, it turns out that it is not.

(00:32:41):

For example, one of my informants works in embedded software in the automotive industry. There are examples there where the software worked absolutely to spec, but what it did not take into account- It worked very well with the car. What it forgot was that there was a human driver in the car.

(00:33:07):

For example, there were automations, parts of braking systems, that caused fatalities. Simply because once that automation was invoked, it surprised the driver, and the driver was unprepared for what happened with the vehicle. The vehicle was now behaving in a different way.

(00:33:29):

So I do not think that the separation between looking at how people think about software, and how software operates on the machine should be absolute. There is actually a very, very important relationship between them. It may be that different people want to focus on different parts of that arc, but they are intimately related, whether we like it or not.

EW (00:33:54):

You have talked about experts, and about high performing teams. They are not the same, are they?

MP (00:34:03):

No.

EW (00:34:04):

How are they the same and how are they different?

MP (00:34:07):

Typically high performing teams have an expert as part of them, or someone with expertise, but it does not mean that everybody on the team is an expert. One of the things that is really interesting about how these teams operate, is that first of all, they are embedding this designerly mindset in what they do. And they are reinforcing it with the development culture that they have, and the dialogues that they have across the team. What they are doing all the time, is helping everybody to be the best they can be.

(00:34:51):

For example, experts do not make fewer errors than non-experts. They make just as many, if not more. But they have much better safety nets for catching errors. And one of the things that high performing teams do, is include in their culture practices that mean that there is a really good safety net for error. Because code is not owned by one person, code is owned by the team, and people look across and help each other and do things together.

(00:35:20):

So the likelihood that they will find errors that arise is very, very high. That they will find them early, is much higher. And therefore that they will be able to address things while the software is still in development, not when it is out in the world.

(00:35:34):

A lot of the business about high performing teams, is having enough expertise, and having that really well-founded culture that embodies this designerly mindset. That embodies reflection, that embodies a learning culture. That includes the kinds of checks and balances, and the kinds of practices, that help them catch errors.

(00:36:02):

But also help them think beyond individual strengths. So one of the things that characterizes high-performing teams, is that very often the team feels greater than the sum of its parts.

EW (00:36:16):

Christopher recently mentioned something, where he heard that the goal- One of the primary duties of a senior software engineer, should be to create more senior software engineers.

MP (00:36:32):

Yeah. I like that.

EW (00:36:33):

I do not know how many companies I have been in where that is actually part of the company culture. Maybe it is because I have done a lot of startups.

CW (00:36:43):

Big companies have that culture, more than small startups. Yeah, I would say that is probably true.

EW (00:36:48):

Startups, they have a job. They do not have a lot of money. They have got to get from here to there. But even big companies, I do not hear that being a goal anymore. They do just as many layoffs as anyone else.

CW (00:37:03):

Well, yeah. I am not sure a layoff is related to- That is usually a higher corporate directive than the development culture of your junior people.

EW (00:37:13):

It is a measure of loyalty.

CW (00:37:14):

Oh, sure.

EW (00:37:16):

Okay. Question. I have an actual question I am getting to. Is it my responsibility to be curious and go to conferences and read books and think about things? Or is it my employer's responsibility to give me the time and tools to do that?

MP (00:37:34):

Again, my answer would be yes.

EW (00:37:37):

Yeah.

MP (00:37:38):

The key is individuals have to want to improve. They have to want to embrace things. But also companies need to be canny about what creates an effective culture, and about how they invest in their people, so that their people can deliver the best products possible.

(00:37:59):

It has been very interesting working with different people. For example, there is one developer I spoke to, who actually wanted to talk to me, because he had been working with the company for a long time, that had a terrific company culture. One where they were very interested in everybody getting better, everybody developing and learning and having opportunities. And then something changed in the company, and it went stale.

EW (00:38:27):

One vice president. <laugh>

MP (00:38:29):

They lost that ethos.

EW (00:38:30):

Yeah.

MP (00:38:31):

So he left, because he was not happy anymore, and he went to a new company. His question to me was, "Okay, I am now in charge of 20 developers. How do I not make that mistake?" That is a really interesting one. There has to be a commitment all the way through to understanding- Let me back off a little bit.

(00:38:57):

One of the things that I was asked at Strange Loop 2022 by a number of the people I talked to, was to investigate what we ended up referring to as "invisible work." The kinds of tasks that developers do, that are really important and that add value to the software, but are not recognized by management, by promotion and so on.

(00:39:21):

I spent the last year interviewing people, and trying to characterize invisible work. And trying to get evidence about the financial gains of addressing invisible work. About making space for things like refactoring code, building tools, learning new methods. Having postmortems on projects to reflect on what worked and what did not, and so on.

(00:39:50):

I think that is part of it, is understanding- One of the things that I heard at Strange Loop this year, was people referring to what are almost two different groups, developers and the MBAs. When there is a disjunction between the engineering and the management, then you get this drift into error by proxy, or error by KPI. All sorts of things that do not value the investment activities, that pay off actually surprisingly soon in terms of product improvements. And hence potentially in terms of the bottom line for the company.

(00:40:43):

So there is a dialogue that has to go on. I think that every engineering manager who can straddle- Who understands the engineering, but who can speak management speak, who stands up for invisible work, who makes evident to management who is less well-versed in the engineering, what the benefits are of having a learning culture, of having reflection, of playing with alternatives and so on, they need to do that. Because that information has to pass both ways, so that everybody understands where the value lies.

(00:41:26):

That comes up over and over and over again. Very often- One of my colleagues, Irum Rauf, did some work with freelance developers, to try to understand their attitude to secure coding. The biggest determinant that she found, was whether anybody was willing to pay for them to do secure coding <laugh>.

EW (00:41:46):

<laugh> So true. So, so true.

CW (00:41:50):

As a freelancer, I can confirm your findings. <laugh>

MP (00:41:53):

<laugh>

EW (00:41:56):

I start out with, "It should be secure." And they are like, "But we needed it a little faster." And I am like, "Okay, but it should still be secure." "But we need it to be cheaper." I am like, "No, at this point I cannot help you."

MP (00:42:08):

So the whole point is it behooves us to try to articulate both the cost of not doing that, and the benefits of doing that, in terms of the kinds of outcomes that those clients can hear and understand.

CW (00:42:22):

Okay, I have a question that I have been sitting on for the past 20 minutes, that has been developing in my mind. You are not just studying this stuff for fun.

EW (00:42:31):

Presumably.

CW (00:42:32):

I assume. It is fun, but you are not just doing it out of the goodness of your heart. You develop findings about the way people work, and things that work, things that do not. The properties of high performing teams and high performing people.

(00:42:50):

What I see in software development a lot, and I have seen a lot of companies- That is not necessarily a good thing for me. <laugh> Is that some things trickle out of formal study of software development. They disseminate through people over time, and they become folklore.

(00:43:13):

So in many companies you end up with this culture of, "Well, this is the way we do things." It has this piece from Agile, and this piece from something else, and this piece that I read in a book somewhere. It is this mishmash of folklore, and they develop a culture out of that. Usually it kind of, sort of works, but it is not what you are talking about, in terms of a well-formed, well-considered way of working.

(00:43:36):

How do you- Not you personally, but how does academia- How do these studies bridge the gap between, "Okay, we have this information." How does that get...

EW (00:43:50):

Actionable.

CW (00:43:52):

Not just actionable. How do you convince people to take a look at this, and make changes? How do you get into companies and say, "Hey, look, this is what we found. This kind of office works, this does not. This kind of design works, or design method works, this does not." I just do not see a lot of times that people are paying attention to this stuff. And it bothers me! <laugh>

MP (00:44:20):

Well, that is an interesting one. It is an accurate observation. So it is an irony to me that it has taken me as long as it has, to get into communities like Strange Loop, in order to have that conversation. Because those are the places where there are lots of people with their ears open. And because they are, I am sure that there are numerous high performers there. They are the ones who have the position in their companies, to take the information back and make change.

(00:45:03):

It is interesting because the reason I got invited to Strange Loop, was because of the "It Will Never Work in Theory" live sessions. Greg Wilson, for as long as I have known him, which is decades, has been trying to get research results communicated to industry in ways that industry can hear. The "It Will Never Work in Theory" blog was one of the efforts that he made to do that, and it was fantastic. But then he felt like people were not reading the blog.

(00:45:35):

So then he went to this ten minute format, "Give me something that is actionable, that you found in your research, in ten minutes." That means make it clear, make it pithy, leave out most of the evidence, do not do the academic thing. <laugh>

CW (00:45:51):

<laugh>

MP (00:45:55):

In terms of my research, that has probably had more traction than anything else that happened in my career. That ten minute video. That is what got me into Strange Loop. That is what got me into Joy of Coding. That is what got me onto your podcast.

(00:46:08):

I think there is a real gap. It is a really hard thing to do. It is not helped because- So I want to make very clear, as an academic, I see myself as a mirror. All I am trying to do is to understand and reflect-

CW (00:46:26):

Yeah, absolutely.

MP (00:46:26):

The nature of expertise that already exists. I am not here to tell anybody how to behave-

CW (00:46:30):

Right.

MP (00:46:30):

Based on what I think. I do not think I am right. I think you guys are right. All I am trying to do is to distill the wisdom that has arisen, out of all of this observation over time. However, there are academics who think they know better.

EW (00:46:51):

Yes, I have met some. <laugh>

MP (00:46:53):

It is not helpful, because what happens then is there is an assumption. So there have been a lot of initiatives that had a lot of good in them, where the initiative failed, because it came with too much evangelism or too much policing.

(00:47:12):

One of the things that I see in high performing teams, is they are always paying attention to new developments, things that are coming, new ideas. It is very rare that they find something in some prototype that an academic has developed, and then want to buy that prototype and use it in their company. What they will do instead is they will say, "Ooh, that is a cool idea. Let us take that idea and reapply that in our work."

(00:47:40):

Unfortunately, academia does not- There are a lot of academics who do not understand that that is a perfectly legitimate form of adoption. I think that there are- Part of the problem is that the language of academia, is very different from the language of industry.

(00:47:54):

The talk that I would give about how experts respond to error in academia, would have to be very different from the one I gave at Strange Loop. The one at Strange Loop focused on concepts and examples, whereas the one in academia would have to concentrate on the evidence base, and on the relationship to existing literature.

(00:48:16):

The forms of- What we really need is more people who-

EW (00:48:23):

Connect.

MP (00:48:24):

Broker that dialogue, who can make the translation step from research to industry, or from industry to research.

(00:48:34):

The other part that is actually quite difficult, is that the timeframes are very different. Industry wants stuff now, very quickly, and academia works at a relatively glacial pace.

CW (00:48:48):

On the flip side, industry can be very set in its ways. Conservative and stubborn about making changes, especially when money is involved. Because I think, taking the open office example that Elecia loves to cite, I think there is tons of research out there that says, "Open offices are terrible for everyone." Everybody I have ever worked with at an open office hates it, and says how they would love to get rid of it. But it is cheap.

(00:49:19):

That is the really tough thing, "Okay. Yes, here are the best practices. You will do better. You will save money." But over here is somebody with a real estate <laugh> balance sheet, and they cannot make that jump in timeframe, to "If I do this over five years, I will end up better off."

EW (00:49:40):

But you need somebody to explain, "Look, you do not buy plastic screws, because they do not last long. You would not even consider it."

CW (00:49:48):

<laugh> Yeah. I know.

EW (00:49:49):

So do not treat your developers-

CW (00:49:51):

Like a plastic screw.

EW (00:49:51):

Like they are cogs. Then they will not leave, and you will not spend 30% of your time interviewing new people, wondering why you cannot hire anyone.

MP (00:50:01):

That is right. So if you can find a relevant piece of evidence, where relevant is determined in terms of their value system-

EW (00:50:10):

Right.

MP (00:50:13):

Right? That is how you make a change in practice. Yeah, you want the statistics that says, "Our turnover rate has increased by this much, since we went to open plan. And look, we lost three of the people who were core to our business."

(00:50:27):

If we can reframe our observations about what works and what does not, in terms of the values of these different stakeholders, that is what we have to do. We need to be able to speak everybody's language.

EW (00:50:43):

Back to the experts talk, which I do understand is only part of your research. You have other books that I probably should be mentioning, and all of that.

MP (00:50:51):

No, that is the one.

EW (00:50:52):

There is the ten minute version on "Never Work in Theory." There is the 45 minute to an hour long one, that is on Strange Loop. I will link both in the show notes.

MP (00:51:05):

Thank you.

EW (00:51:06):

One of the things from the shorter version that really hit me, is something actionable that I could start doing, is pair programming. And the reason- I never- I mean, I have done pair programming in the past. I have done it with people-

MP (00:51:26):

May I pause you?

EW (00:51:27):

Yeah.

MP (00:51:28):

Do you mean pair programming or pair debugging?

EW (00:51:31):

Ah, right.

CW (00:51:31):

Hmm.

EW (00:51:31):

That was actually part of it. Yes. Pair debugging is what you are recommending. I have done both. I have done pair programming and pair debugging with one person, who was remote and basically my contemporary. We had a lot of the same skills, but not a lot of the same knowledge. And we became really good friends, and had a really fun time doing things together.

(00:51:58):

But pair debugging, especially when the skill sets are different, so that the expert has to explain what is happening, and therefore has to articulate it, and therefore has to think about it. And the more junior person is hearing this thought pattern and looking at the code, and probably feeling like they are not contributing anything, but gaining experience both in design and development, as well as implementation.

(00:52:27):

Why does not everybody do this? Why have I not been doing this?

CW (00:52:33):

I love pair debugging. It is fun.

EW (00:52:35):

I know. And yet even you and I who are in the same building, often working on the same project, do not always manage to do it.

MP (00:52:45):

Because of the agile movement, there is a lot of research on pair programming, particularly with student programmers. There are real advantages with students to pair programming, in terms of just the kinds of dialogues that you have articulated.

(00:52:59):

What I see much more often in the teams that I study, is I see very little pair programming. But I see routine pair debugging. I saw that well before Agile was articulated. What they are doing in- There are key things that happen with pair debugging. You have already explained it, that you get the dialogue between somebody who sees further, and somebody who is just handling a bug at the moment.

(00:53:37):

It is a really good way to make sure that, for example, more members of the team are familiar with the codebase. To get people to look across each other's shoulders. To get new perspectives on things. To pick things up that might have been missed, if there is only one person going over and over and over it.

EW (00:53:57):

To start dialogues about other things.

MP (00:54:00):

Yeah. It is a very, very powerful mechanism. As I say, I see it spontaneously in almost all of the high performing teams. In fact there is one company that I studied, where they were using pair debugging as an onboarding process.

EW (00:54:15):

Oh, yeah.

MP (00:54:17):

What they did was they provided selected pull requests to the new person. The pull requests were distributed across the codebase. It meant not only did they trawl through the different parts of the codebase, but they also then sat down with somebody else on the team, who was the expert in that part of the codebase, or the most knowledgeable about that part of the codebase. So they met the team, as well as meeting the code.

(00:54:45):

In the course of that, it built their confidence. Because they were doing useful work, while they were also becoming part of this bigger picture. I thought that was a brilliant way, a strategic way, of using pair debugging.

EW (00:55:00):

Some of the features of pair debugging come up with rubber duck debugging, when your secondary person, or maybe primary, is a stuffed animal. But you do not get that knowledge transfer.

MP (00:55:17):

Yeah. So what rubberducking gives you, is it gives you that externalization. Very often just saying something out loud, that you are thinking about, basically causes people to articulate assumptions, to stumble over things that their mind just drifts across, and so on. There is real value in rubberducking. But as you say, what you do not get is the exchange. And you do not get the laughter necessarily.

EW (00:55:46):

Yeah. I think laughing is actually- You asked, as part of talking about doing the show, what I would want someone like you to research. I think that the key is, how important is laughter, amusement, jokes and stories in the development of code?

(00:56:07):

I mean, if I can make my file seem almost like a story, like, "Okay, here is where it begins. This is what it does. This is what you need to know, in my comments." I can make it feel like it is an actionable story. I feel like it is better code, in part because it is easier to read.

(00:56:31):

But that is the study I want, is does laughter make for better code?
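As a sketch of what that file-as-story idea can look like in C: a header comment that says where it begins, what it does, and what a reader needs to know before touching it. The module and all names here are invented for illustration, not taken from any real project.

    /* motor_ramp.c -- Here is where it begins: spin the motor up gently.
     *
     * What it does: nudges the PWM duty cycle toward a target a step at
     * a time, so neither the mechanics nor the user gets surprised.
     *
     * What you need to know: ramp_tick() must be called at a fixed rate,
     * because the ramp slope depends on it. RAMP_STEP trades smoothness
     * against responsiveness.
     */
    #include <stdint.h>

    #define RAMP_STEP 2             /* duty counts per tick */

    static uint8_t current_duty;    /* what the hardware is doing now */
    static uint8_t target_duty;     /* where we want to end up */

    void ramp_set_target(uint8_t duty) { target_duty = duty; }

    /* Call at a fixed rate (for example, from a 1 kHz timer). */
    void ramp_tick(void)
    {
        if (current_duty + RAMP_STEP < target_duty) {
            current_duty += RAMP_STEP;  /* well below: step up */
        } else if (current_duty > target_duty + RAMP_STEP) {
            current_duty -= RAMP_STEP;  /* well above: step down */
        } else {
            current_duty = target_duty; /* close enough: snap to target */
        }
        /* pwm_write(current_duty);  hardware hook, omitted in this sketch */
    }

Whatever one thinks of the code itself, the point is that the comments read in order: beginning, behavior, caveats.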

MP (00:56:38):

Okay. Can I make writing noises now, so I can write that down? <laugh>

EW (00:56:43):

Sure. Giggling and programming, together at last.

MP (00:56:47):

It is interesting to me. I always wanted to bug the coffee machine at the places I was studying, because it was interesting how many insights happened at the place where people got coffee together. They would bump into each other, they would have a few words, they would exchange a joke or something, and they could just ask a question. That happened over and over.

(00:57:08):

Some places have embedded that. For example, I was at Mozilla in Toronto. Their kitchen is amazing. There is so much work that happens, as people pass through the kitchen. Listen to other people's conversations. Chime in here, chime in there, exchange information. It is all very brisk, but it is incredibly powerful.

(00:57:32):

I think part of that is- I had a student named Martha Hause, whose doctoral research was on student teams, but student teams that were spread across the world. In those teams- Again, I do not want to overgeneralize from students to professionals. But the teams that spent a higher proportion of their time, particularly their early time, socializing and cracking jokes, actually were also the teams that had better outcomes at the end. Because they learned in that time. They built trust, they had awareness of each other, and they found it easier to ask questions.

EW (00:58:16):

That is the part I am absolutely terrible about.

MP (00:58:18):

<laugh>

EW (00:58:18):

I remember it was my second or third job. Oh, HP Labs. So my second, third job. I did not know enough to be useful, and I spent my day at my desk trying to understand what was going on. I skipped lunch, because I was trying so hard to understand. My boss came by and said, "You are doing it wrong. <laugh> You need to go to lunch. You need to understand these people, more than the technology."

(00:58:50):

I was so shocked, because that went against everything I believed in. I just had to learn all of the information. And there was John saying, "No, no. Stop reading the manual, and go have lunch with these people."

MP (00:59:06):

I like John.

EW (00:59:07):

It was weird. It is still not something I am good at.

MP (00:59:12):

But it is part of it. It is part of what you are trying to understand, as you are trying to figure out how the software will serve them.

EW (00:59:18):

Well, they served wasps with my tuna sandwich, when I finally did show up.

CW (00:59:23):

What?

MP (00:59:23):

<laugh>

EW (00:59:27):

You have several books. We are almost out of time, so can you give me a speed rundown of your books?

MP (00:59:36):

The only one I think I would like to give the rundown of is "Software Design Decoded," which is basically 30 years of empirical research, distilled into 66 paragraphs with illustrations. Everything in the book- It is a collection of insights. Everything in the book is grounded in empirical studies. There is a website associated with the book, that has a lot of the bibliography for evidence that we built on.

(01:00:03):

It is a bit of a Marmite book. The people who understand what it is, love it, and the people who actually want a how-to book, hate it. But it does capture a lot of the elements of this design mindset. This collection of knowledge, reflection, and culture, that builds into expertise. It is an interesting one, because it really did take 30 years to get to the point where I could co-author a book like that <laugh>.

EW (01:00:43):

I liked the part about sketching best. About externalizing thoughts, and sketching the problems and the solutions. We have talked some about that.

(01:00:50):

But you also had one about "Experts design elegant abstractions." And the paragraph starts, "While all developers create abstractions, experts design them. A good abstraction makes evident what is important, what it does and how it does it." How do you design a study for that?

MP (01:01:11):

Okay. Well, I certainly have not designed a study to watch somebody create abstractions. That is the kind of emergent observation that happens over time, over lots of studies. Where you collect the examples as they arrive, and then over time make sense of them.

(01:01:30):

That insight aligns with another insight, which is about "Focusing on the essence." And I know my colleague Andre Van Der Hoek, when he teaches his class on software design, one of the insights from the book that he really stresses with the students, is to focus on the essence. Because it is really easy for people to sit down, and immediately code the bits that they know how to code. Which is almost never the part that is the hard part, or that is the crux of the problem, or is the defining element of the solution.

(01:02:02):

So all of this business about abstractions starts a little earlier. It is about learning to ask the right questions. It is about directing attention to the heart of the problem. What is the essence? What is the core challenge or issue? Are there analogies in other domains? What can we learn from them? What are the dissonances among the alternatives I can think of to address this thing?

(01:02:24):

I have to credit Mary Shaw there, who often talks about insight through attending to dissonance, and its relationship to innovation. So if we have lots of examples or use cases, what do they have in common, or how do they differ? What is most important about them? What is the thing we cannot do without?

(01:02:50):

Very often in the course of stripping back to that essence, experts are identifying the thing that they have to express. And that is the first step. Expressing that as a good abstraction, is something that they learn over time. I do not know that I would know how to teach somebody to make good abstractions. But the whole notion of trying to strip away detail, trying to find the essential bits, trying to ask the right questions, is I think a mindset and a set of practices that people can learn.
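To make that concrete, here is a hedged example in C of the difference between merely having an abstraction and designing one: a small byte queue whose interface makes evident what it is for and what it does (push, pop, and what happens when it is full or empty), while hiding how (indices, wraparound). The names are illustrative, not from the book or from these studies.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* A fixed-capacity first-in, first-out byte queue.
     * Initialize with BYTE_QUEUE_INIT; push returns false when full,
     * pop returns false when empty, so callers must handle both cases. */
    typedef struct {
        uint8_t data[64];
        size_t head;   /* next slot to write */
        size_t tail;   /* next slot to read */
        size_t count;  /* bytes currently stored */
    } ByteQueue;

    #define BYTE_QUEUE_INIT { {0}, 0, 0, 0 }

    bool bq_push(ByteQueue *q, uint8_t byte)
    {
        if (q->count == sizeof q->data) {
            return false;                          /* full: caller decides */
        }
        q->data[q->head] = byte;
        q->head = (q->head + 1) % sizeof q->data;  /* wraparound stays hidden */
        q->count++;
        return true;
    }

    bool bq_pop(ByteQueue *q, uint8_t *out)
    {
        if (q->count == 0) {
            return false;                          /* empty */
        }
        *out = q->data[q->tail];
        q->tail = (q->tail + 1) % sizeof q->data;
        q->count--;
        return true;
    }

The essence the interface exposes is "bytes go in, bytes come out, in order, up to a limit"; everything else is deliberately invisible, which is what lets the implementation change without disturbing its callers.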

EW (01:03:30):

It does seem a little hard to learn from this book. I mean, your expert sounds like a wizard or a perfect person. Some of these things, I think I do them? Maybe I am an expert? Maybe I just think I am. What is it? Krunning Dugger?

CW (01:03:47):

Dunning-Kruger.

EW (01:03:49):

Thanks.

MP (01:03:51):

The response we have had from the people who use the book, is that they use it- First of all, they read it and they recognize some of it.

EW (01:04:02):

Mmm-hmm.

MP (01:04:02):

Sometimes they recognize things and say, "Oh, I used to do that and I forgot about that. Yeah, I have to start doing that again." So they use it as a way of refreshing themselves. Then when they come across something they do not recognize, they say, "What is that about? I need to understand that."

(01:04:15):

In terms of starting from scratch, I think almost everybody that we know who uses the book, will focus on one thing at a time. For example, there is one group that, during their standup meetings, would pick out one insight per meeting. Talk about it a little bit, and hold it in mind. It was a way to do a refresh on ways of thinking. Things that are useful, practices we might have forgotten, insights we might have forgotten.

(01:04:53):

In terms of people starting out and trying to build expertise, again, the book is a means for reflection. You do not have to try to embed 66 things. Someone spoke to me about the book being like 66 mantras, and it was like, "Oh, that is too many mantras." <laugh>

CW (01:05:13):

<laugh>

MP (01:05:13):

That is just too many. The point is, you do not have to do everything at once. You do one thing. When that makes sense to you, and becomes much more part of your practice, you can move to something else.

(01:05:26):

André and I are currently trying to do the exposition of the mindset, as a path to learning it. Learning to acquire that mindset. So we are trying to do the how-to book. That is the longer version of this.

EW (01:05:42):

That is tough.

MP (01:05:42):

But we have been at it for ten years. It is going to take a while. <laugh>

(01:05:48):

But the sketching stuff is interesting to me, because I started in this realm because my driving interest was the relationship between language and thought. Computing gave me an incredible arena in which to investigate that relationship, partly because there were artificial languages that we designed ourselves.

(01:06:10):

But I have ended up over the years, not just looking at programming languages and pseudocode, but also at what people sketch. There is a lot of research about the importance of sketching in design. Not just in software design, but across design domains. And there is a lot of research about the value of multiple modalities in creativity. Swapping between the visual and the textual, and so on.

(01:06:36):

It is just very interesting to me to try to- Sketching is one of the insights, one of the ways that we have into the kinds of mental imagery that people are using. The kinds of ways that people are thinking internally about problems. There is not a one-to-one correspondence by any means.

(01:06:53):

But we are always trying to- I have spent a very long time- I have not finished- trying to draw out what it is that these innovators, these designers are doing in their minds. That allows them to encompass incredibly complex problems, in many cases. To keep all the balls in the air. And to find these lean, elegant routes to a solution.

(01:07:22):

One of the things that was very interesting, I did a study at one point where I asked people to work on a problem they currently had. I just sat and watched them, as they, in most cases, literally sat on a chair with an empty pad of paper and a pen or pencil in their hand, and waved the pencil around in the air and never wrote anything on the piece of paper.

(01:07:44):

I would intervene with questions. I would interrupt them to ask them, I do not know, if they smelled something, or what color it was, or what kind of thing- It was very interesting that very often as I was watching people, I could see them thinking, I could see them sketching in the air. Very often they would drop the pads, say, "Excuse me a minute," run down the corridor to talk to a couple of colleagues.

(01:08:08):

Then they would all stand around a whiteboard, and that designer would start drawing something. Very often it was a conceptual sketch that captured the essence of a solution. That captured the solution to one of the big obstacles in that design space.

(01:08:29):

Very often that sketch became an icon in their design. They would return to the sketch, and they would interrogate the sketch. They would redraw the sketch, and they would challenge the sketch, on a regular basis.

(01:08:42):

This whole business about externalizing thought. We talked about the rubberducking. That is about externalizing thought verbally, but sketching externalizes thought usually visually or in mixed media. And again, it is a way to make things explicit, and to allow other members of the team to interrogate the ideas. And to have really helpful critical dialogues about what is going on. There is a lot of sketching. <laugh>

EW (01:09:21):

Man! It has been really good to chat with you. Do you have any thoughts you would like to leave us with?

MP (01:09:27):

Probably the main one is that if anyone has a topic they want researched, I would invite them to get in touch with me. And if there is a response to any of this, I would love to hear it.

EW (01:09:43):

That is funny because there is a whole section in the outline that I did not get to, with our Patreon listeners asking questions like, "Do coding standards actually increase readability?" And, "Which trends are headed towards unsustainable futures?" And, "Do requirements and specifications really make projects more likely to ship on time?" So you have already got a bunch of those, but I am sure there will be more.

MP (01:10:09):

<laugh> Well, I would be very interested in a conversation about that stuff. Because they touch on a pattern that happens a lot in terms of tools. Where I am using the word "tool" very, very broadly: in terms of notations, in terms of modeling, in terms of development tools. The pattern is that most tools are built by someone, to solve a problem that person has. Right? Some of the best tools in software development evolved that way.

(01:10:37):

But what happens that ossifies things, that makes them stale or less effective than they could be, is this notion that the tool has to be used in a very particular way. Because one of the things that is a barrier to adoption, is that if your tool has great ideas in it, but it works in a way that does not fit well into the development culture that exists in my team, why would I want to change what is working in my team, to adopt the tool? So instead, I will just adopt the idea of the tool. The failure to recognize that that is a really valid form of adoption, is problematic to me.

(01:11:30):

There are lots of examples in terms of things like modeling languages, things like specification approaches, and so on. Where what ultimately happens with a lot of the tools that people create, is that once the big adoption hump passes, people will select the parts of that tool that work for them. And continue to use those, and throw the rest away.

(01:11:57):

That is worth really paying attention to. It does not mean that the tool was a failure. Because very often what becomes embedded practice, is still something that was influenced by that tool. But the tool itself may not be the one that they are using.

CW (01:12:12):

Mmm.

MP (01:12:12):

That was true- UML was a great example of that. Another example is formal methods. All of these are things that carry overheads. What people negotiate within their own practice, is that trade-off between the costs of using it, or the costs of adopting it, and the value it provides.

(01:12:33):

What they are looking for is the sweet spot, where they are getting a sufficient return on investment. So they will use the parts that give them a return, and they will potentially abandon the parts that are not helping them. Unless there is some kind of management structure that demands it.

(01:12:53):

The people who develop the tools often are not happy about that. But look at why there are so many different variations of Agile. We could have a whole conversation about what Agile is, and it would carry on for an hour. The answer is, people are selecting the parts of the ethos, and the parts of that set of practices, that work for them. Understanding what works in what context, that is an interesting study.

CW (01:13:31):

Cafeteria Agile.

MP (01:13:32):

<laugh>

EW (01:13:35):

She talks about tools, UML and whatnot, and I am sitting here thinking about Wokwi and Godbolt and VSCode.

CW (01:13:42):

Yeah, those are the indie tools. Those are the edgy-

EW (01:13:47):

<laugh> Our guest has been Marian Petre, Professor Emeritus of the School of Computing and Communications, at the Open University in the United Kingdom. She is also the co-author of "Software Design Decoded: 66 Ways Experts Think."

CW (01:14:02):

Thanks Marian. This was a really interesting conversation.

MP (01:14:05):

Well, thanks both of you. It has been fun.

EW (01:14:08):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group, for their many suggestions on research that Marian should do. And thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm. We will forward things to Marian, of course.

(01:14:26):

And now, a quote to leave you with, from "Software Design Decoded: 66 Ways Experts Think." "Experts solve simpler problems first. Experts do not try to think about everything at once. When faced with a complex problem, experts often solve a simpler problem first, one that addresses the same core issues in a more straightforward manner. In doing so, they can generate candidate solutions that are incomplete, but provide insight for solving the more complex problems that they actually have."