502: Chat, J'ai Peté!
Transcript from 502: Chat, J'ai Peté! with Christopher White and Elecia White.
EW (00:00:06):
Hello and welcome to Embedded. I am Elecia White, here with Christopher White. This week we are going to talk about, oh gosh, all kinds of things. Where do you want to start?
CW (00:00:19):
I do not know. What do you want for lunch?
EW (00:00:20):
That is a longer discussion than we have time for.
CW (00:00:22):
Okay.
EW (00:00:22):
What do you think about "Murderbot"?
CW (00:00:25):
I think it is a good series of books, and so far is a faithful rendition on television. I appreciate how they have incorporated the internal dialogue, in a way that makes sense. That was very boring. I am sorry.
EW (00:00:43):
It is funny, the books are "Murderbot Diaries," and they are all from its perspective.
CW (00:00:49):
By Martha Wells, who we have had on the show, if you want to go find that episode.
EW (00:00:53):
Which was really fun. Clearly podcast abuse of power.
CW (00:00:57):
Eh.
EW (00:01:00):
The show is "Murderbot." There is a difference between "Murderbot Diaries" and "Murderbot." The focus is less on its perspective, although it still has a strong voice.
CW (00:01:15):
It is still pretty central. But yes, there have been changes, because you have to change when you are changing media. Mediums? Media. When you are changing art forms? What is the word I am looking for here?
EW (00:01:26):
"Mediums."
CW (00:01:27):
Yeah.
EW (00:01:28):
"Media" is the plural of "medium."
CW (00:01:30):
I know! It is just confusing. Yeah. Obviously there are always changes when you adapt something. I think this particular story, novel, whatever, is- I would not say easier to adapt, but it is more aligned with adapting to television, because of the way she wrote it. They are shorter. They tend to be novellas.
EW (00:01:51):
Right. Most of the "Murderbot Diaries" books are novellas.
CW (00:01:54):
It is probably easier for them to adapt a novella into a series of television, than say "Wheel of Time" or something, which are many thousand page books. Right? There is a lot more compression that goes- I do not feel like there is a ton of compression happening with this. I feel like I remember almost all the beats that are happening in the TV show, so I think they have done a good job with that. What do you think of the casting?
EW (00:02:18):
I know I am supposed to say, "How dare they cast a white man as Murderbot." I do not know. I am okay with it. I love the casting of the Preservation folks. They are so perfect.
CW (00:02:38):
Yeah.
EW (00:02:38):
They are just so perfect.
CW (00:02:43):
Mensah in particular is exactly how I imagined her, so that was kind of interesting. Yeah, they have gone out of their way to address the itness of Murderbot. <laugh> Explicitly in some cases. So I think-
EW (00:03:00):
Did not need a full frontal.
CW (00:03:03):
I mean, since you are casting something that does not exist, it is not like- Yeah, it is a human cyborg construct thing.
EW (00:03:16):
But the thing is, we are not objective.
CW (00:03:18):
Right. Yeah. We like the series.
EW (00:03:20):
We like the series. We are prone to liking many of the Apple TV productions. This is like when episode nine came out. No. <laugh> Episode one.
CW (00:03:38):
Star Wars?
EW (00:03:38):
Of Star Wars came out. It was like, "You know, I do not like Jar Jar Binks, but honestly, he could just read the intergalactic telephone book, and I would be fine."
CW (00:03:47):
It was George Lucas reading the intergalactic telephone book. I do not want to listen to Jar Jar <laugh> reading the intergalactic telephone book.
EW (00:03:53):
There is some level of, "I am just so happy it is happening."
CW (00:03:54):
Yeah.
EW (00:03:56):
That-
CW (00:03:57):
Which fades in time. But this is not- I do not think this is one of those situations. It is pretty good.
EW (00:04:01):
I think this is pretty good. Yes.
CW (00:04:03):
Interestingly, the episodes are quite short for- They are keeping them to the 25 minute sitcom length. It is not a sitcom, although it has sitcommy elements.
EW (00:04:15):
But no laugh track.
CW (00:04:16):
Yeah. But they are short, which is weird. Streaming television these days tends to be, "We will make the episodes as long or as short as they need to be, for the particular chunk of the story we are telling right now." So you get shows that have hour long episodes, and then the next one might be a half an hour, or 35, or 45.
EW (00:04:32):
These are all pretty snappy.
CW (00:04:32):
These are all pretty snappy. It is interesting. I think it will be-
EW (00:04:35):
When the last one ended. I was like, "No. That was just the pre-show pre-credits."
CW (00:04:39):
And they are releasing them weekly, which some streaming platforms do, and some do not.
EW (00:04:45):
Oh. I would binge it. I would watch every single one of them.
CW (00:04:48):
I think it is better for shows, that they do not drop them all at once. In terms of gaining followings, and people talking about them and stuff. But it is hard to switch back and forth and be like, "Oh, this show is really fast paced. And also I have to wait a week between each short episode." Anyway, if you like sci-fi and quirky sci-fi, I think you might enjoy that.
EW (00:05:11):
And while it is called "Murderbot" and there is a fair amount of violence, there is also a lot of humor.
CW (00:05:19):
Well, there is not a lot of murder. It is not like- If you have not seen the show and you have not read the books, "Murderbot" is the name that it gives itself, as sort of a quasi derogatory, it is not happy with its place in the world.
EW (00:05:35):
It is like calling yourself "Dummy" in your head.
CW (00:05:39):
Yeah. It is not about a murderer. It is not "Dexter" in robot form, or something like that. Anyway, yeah, so I would recommend the show, even if you have not read the books. I think that is how well they have dealt with it. Either one is a gateway to the other. Yes. Five minutes on "Murderbot." Done!
EW (00:05:58):
<music> We would like to thank Nordic for sponsoring this show. We have really appreciated their sponsorship and as the time comes to an end, well, we will still love you Nordic.
(00:06:13):
In the meantime, they did give away some things. We have some winners to announce. Jordan Staale, Emily Dolezalek and Wojciech Stodulny. If any of you would like to email in and tell me how to actually pronounce your names, I am happy to do so in the next episode <laugh>.
(00:06:39):
But thank you to Nordic for their sponsorship. We appreciate it. Keep on Nordic-ing. <music>
(00:06:45):
When I mentioned on the Patreon Slack that we did not have topics for this week, I also said I did not really want to talk about AI, because I feel like we talk about AI a lot. But then David said that he wished we would, and said some nice things about our voice of sanity, down-to-earth perspective, and practical experience. So I feel very pressured into talking about this, and I know "The Amp Hour" did too.
CW (00:07:20):
I did not listen to their latest episode where they said that, or whenever that was.
EW (00:07:26):
So, do you want to start? Or do you want me to give my short version first?
CW (00:07:31):
Why do you not give your short version first? Because I do not know if you have looked at the notes.
EW (00:07:35):
I did not, but I certainly did not expect the <laugh> 16 point list. Okay, I have two points. First, I do think you should try to use the AI stuff, whether it is Gemini or ChatGPT or whatever. It is interesting to get to know.
(00:07:54):
Second, it is- Wait, no, I should not tell you, because I had two points. So with that, it is a tool. It is not always a great tool. It has many disadvantages. One of the things that was said to me, that makes a lot of sense, is you should not use it for things you cannot do yourself.
(00:08:15):
So you are thinking about hacking together a script in an hour, to take care of some problems you have. That is a great use, because at the end of talking to your AI assistant, you have something you understand, because you would have written it if you had not been quite so lazy.
(00:08:34):
I am a fan of being lazy in engineering, but if you did not know how to get started on that script, or you did not know how to do it, or it is using libraries you do not know, then you do not have to try to run it. Just do not do it. If you cannot do it yourself, you should not ask the AI to do it.
(00:08:52):
The second point involves French. Apparently when you say, "ChatGPT-"
CW (00:09:02):
<laugh> It is not a point, but please continue.
EW (00:09:06):
And you do not have to say it in a French accent. It is not "ChatGPT," but it does help if you say it in a French accent. It translates to, "Cat, I farted."
CW (00:09:24):
Chat, J'ai Peté!
EW (00:09:25):
So when-
CW (00:09:25):
J'ai Peté! Right?
EW (00:09:25):
You hear about how ChatGPT is going to ruin the world. You can just translate it to, "Cat, I farted is going to ruin the world," and it makes the whole thing far more palatable. Okay, so now every time Chris says "ChatGPT-"
CW (00:09:41):
I am not going to say "ChatGPT" ever.
EW (00:09:43):
You should translate- Actually, maybe it should be anytime any of us says, "AI assistant, or whatever."
CW (00:09:48):
That is funny because one of the other ones is named "Claude," which is French sounding.
EW (00:09:51):
You should just go ahead and translate it to, "Fart."
CW (00:09:56):
And "Claude" is a euphemism for "an idiot."
EW (00:09:59):
I do not think that is etymologically correct, but I am going to go with-
CW (00:10:02):
C L O D, Claude.
EW (00:10:03):
Yes, but C L A U D probably-
CW (00:10:06):
And "Chat, J'ai Peté!" is not spelled...
EW (00:10:09):
All right, all right. You are right, you are right. Okay.
CW (00:10:11):
I apologize in advance, for the next 15 to 20 minutes, everybody.
EW (00:10:15):
<laugh>
CW (00:10:15):
But I think you should blame David.
EW (00:10:19):
Blame David. Yes.
CW (00:10:20):
You did ask for this. I have spent the last few days thinking about this, and I have some thoughts. They are mostly disordered. I have organized them by class of thought. I do not have a lot to expound on them necessarily, but I am going to go through them all.
(00:10:34):
First of all, let me preface this by saying I think everybody is not treating AI in the way it needs to be treated. Everybody comes at it from their particular narrow perspective.
(00:10:46):
Perspective as an engineer. A lot of people who listen to the show are coming at it from a perspective of engineers. "What can it write for me? What is it going to do to the code? Is it going to take my job? Should I be using it? How do we use it? What are the implications of this to engineering?"
(00:11:01):
AI- I am going to say "AI" instead of "LLMs," just because that is what everybody says. So when I say "AI," I mean large language models that do predictive text. Not necessarily vision classifiers, or things that separate music into different tracks, which I use all the time.
EW (00:11:27):
But we are talking about ChatGPT, Claude, Gemini.
CW (00:11:30):
We are talking about ChatGPT, Claude, Gemini, all of those things. Okay. When you step back from all this, and you take it from a non purely engineering standpoint, this is so complicated that it becomes almost like talking about religion. Everybody has their things they like about it. Everybody has their worries. But it is all a big mishmash.
(00:11:48):
So here is my mishmash. First of all, I would admit, okay, I have tried these. I have tried these with the interest of I need to know how they are developing and what is happening. I do not use them regularly. I have written a few scripts with them. I see how they do code. I have had a few conversations with them.
(00:12:06):
I do not use them on a regular basis. Probably once every two weeks I will check in with something, and then I will stop myself. I will explain why I stop myself in a few moments, but I will get there.
(00:12:19):
First of all, it is fun. Right? It is fun using these things. You are talking to a robot. Wow! It is everything we have ever been promised. Right? It convinces you. It is very convincing. Talking to one of these things is like talking to a person. It has a little quip sometimes. It engages with what you say. It remembers what you say. It has context. All of that is super fun.
(00:12:42):
It is fun to have it write code. It is fun to have it make up limericks. All of that is cool stuff. It is very interesting.
(00:12:49):
It is also deceptive, but I am not going to say much more about that.
(00:12:53):
ELIZA was fun. If you remember ELIZA from the eighties- Well, it was actually from the sixties I think. But ELIZA was a little conversational engine. It was purely heuristic, but it was fun to have a conversation.
(00:13:07):
This is more like talking to HAL, if HAL were super obsequious all the time.
EW (00:13:14):
I have turned that off. Thanks to some Reddit posts, I have discovered how to do the first prompt so that I get a much colder, much more logical sounding thing. I know that it is not really any more logical. It is not any less hallucinatory, but-
CW (00:13:33):
It gives the illusion of being something that is alive. And that is really important for a lot of reasons, both good and bad.
EW (00:13:41):
So scary.
CW (00:13:41):
But it is not.
EW (00:13:41):
This is how so many sci-fi things start.
CW (00:13:44):
It does not understand anything. It does not really remember anything. It does not really know anything. That is the key bit. It is like a compression algorithm for all human knowledge. But a lossy compression algorithm, that when it decompresses stuff, it has errors. So it does not know when something is wrong that it has told you.
(00:14:08):
Anyway. Back to my long list, which is a mess. I am telling everybody up front, it is a mess.
EW (00:14:12):
Well, we have crossed off number one, it is fun.
CW (00:14:14):
Yeah.
EW (00:14:15):
Because it is fun. You wrote "Star Wars" plays-
CW (00:14:18):
Well, back when it first came out. Yeah.
EW (00:14:19):
It was hilarious.
CW (00:14:21):
It is capable of useful stuff. I will be the first to admit that. You can write code with it. You can say, "Hey Claude. I need a script in Python that will do this," and it will do it. And the script will work, probably.
EW (00:14:34):
Maybe.
CW (00:14:35):
Keep in mind the "probably." I have used this in a pinch a couple of times, when I need some stupid little script to do something, that I just do not have time for. It mostly works.
(00:14:44):
However, it is a really bad coder! Have you read the code that your LLM is producing? It sucks! It is hard to read. It will produce functions that are pages and pages long, non-modular. And yeah, you can go have a conversation with it and say, "Make this more modular. Do this and that."
EW (00:15:03):
"Make it simpler." "Make it simpler," is really important.
CW (00:15:04):
"Make it simpler."
EW (00:15:04):
"Make is more manufacturable."
CW (00:15:07):
"Do not use these kinds of structures. Do not make these kinds of mistakes." But you know what? You have to know.
EW (00:15:12):
How to do all that.
CW (00:15:12):
How to do all of that, before you can ask. It is a Catch-22. If you have got a bunch of people, junior people, who are using this to write, they are never going to learn good code. Because either a senior person has to tell them, or they have to learn it through seeing lots of examples of good code. And since LLMs produce, in my opinion, pretty crappy code, <laugh> that is going to be a problem.
(00:15:37):
So it is capable of a lot of stuff. It also appears to be capable of other stuff, but is not. That is the thing. They will blindly tell you the answer to everything, because they cannot say, "No."
EW (00:15:54):
They cannot say, "I do not know."
CW (00:15:55):
They cannot say, "I do not know." So one thing I have had happen when doing scripts and stuff, is if I am on a corner that is not well trained for, we get in the loop. Like, "Oh, okay. Write this script that does this." It does. It does not work, because it is calling a module that does not exist, or using a module inappropriately, or it just does not know how to do it exactly right, but it will produce the code. Does not work.
(00:16:18):
You correct it, "Oh, you did this wrong. Do that." "Oh, I am sorry." It always apologizes.
EW (00:16:24):
You can turn that off.
CW (00:16:27):
Right. Which is another issue. And then it will correct it, "Here is the right thing." And I have gotten in loops, where it will go back and forth between one wrong thing and another. A, B, C, D, E and back to A and never get a right answer, because for some reason it cannot.
(00:16:42):
But through all of that, it was always saying, "I see the mistake. Here is the correction. I see the mistake. Here is the correction." It does not see the mistake.
EW (00:16:55):
No, it only says it sees the mistake.
CW (00:16:56):
It says it sees the mistake, because that is what it is trained on conversations about code to do. Okay, that is part tech stuff. I will probably come back to some tech stuff.
EW (00:17:05):
What about the- I sometimes use it to increase the amount of tact in my messages.
CW (00:17:14):
Do you feel like you are learning to be more tactful, by doing this process?
EW (00:17:18):
No. I have reached the age where I am decreasing the amount of tact that I provide for other people.
CW (00:17:23):
Do you think it is important to increase the amount of tact that you use?
EW (00:17:27):
People cry if I do not, so yes. I do not like it when they cry.
CW (00:17:32):
Yes, that is a use. I guess if you are reviewing what it says, that is useful. That is fine. That is probably one of the useful things. I would say- And it will become clear as I continue through this multi multi-point thing.
EW (00:17:45):
<laugh> Long list. I did not expect it.
CW (00:17:48):
I do not know if that benefit is worth the hundreds of billions of dollars of investment.
EW (00:17:55):
No, because before I started using it, I had plenty of other scripts that I used to soften messages.
CW (00:18:02):
Or other people. But it is tough to bug other people. I get that.
EW (00:18:04):
Yes, there were also other people.
CW (00:18:06):
There is a tendency to not bug other people now, because you can go ask the chatbot, and we lose something there.
EW (00:18:13):
That is probably true. Some of my friends have become much closer, because I asked them to help me rewrite the message.
CW (00:18:22):
Okay, I am going to start getting into some things that are going to piss people off. I think people can abuse this, way more easily than some previous technologies. Computers? Somewhat dangerous, I will admit. Difficult to use.
(00:18:34):
Chatbots? Pretty easy to use. You go talk to something and tell it what you want. It has useful things, and it is extremely dangerous. It is extremely dangerous at scale, when you give a chatbot to every single person on the planet.
(00:18:49):
I think the analogy is similar to munitions in some way. Very useful in certain contexts, but also extremely dangerous at scale.
EW (00:19:02):
So you are going to chatbot a bomb?
CW (00:19:06):
Yeah! You can do that.
EW (00:19:08):
Of course you can, because-
CW (00:19:09):
Or you can produce tons of propaganda and flood social networks with it. You can produce- We are just talking about chatbots here.
(00:19:17):
The same line of AI things can produce very convincing video now. With a prompt. Very convincing photographs, where it takes somebody who is familiar with the outputs to notice they are AI. Very convincing voice. You can make it sound like anyone. So it is trivial-
EW (00:19:35):
You do not even know that this is our podcast anymore, do you?
CW (00:19:37):
This is not. I just said a prompt, "Argue against AI," and we are kicking it at the beach.
EW (00:19:43):
<laugh>
CW (00:19:45):
I think there is a tremendous potential for abuse on the social scale, that has nothing to do with writing scripts in Python. I think we are seeing that now. There have been <laugh> a lot of cases.
(00:19:56):
There have been legal things, where lawyers run to ChatGPT and their filings have cases that do not exist. The Department of Health and Human Services just released a big study. A position paper about, I do not remember what, probably something bad that they are going to do. Where they cited a bunch of scientific papers that do not exist, and the authors say, "I did not write that." So this is the kind of stuff that is happening.
(00:20:28):
At the same time, I do not think the companies and people that are developing and pushing it are trustworthy.
EW (00:20:37):
Well, going back to the previous point, writing papers. I want to take that back to writing scripts. Because those people wrote papers that they could not have done themselves. In order for them to have written those papers themselves, they would have had to be familiar with the papers they were basing things on.
CW (00:21:01):
You think the lawyers are not capable of writing legal filings?
EW (00:21:04):
Well, clearly they are not good enough at writing legal filings, to be able to read one and say, "No, that is not correct."
CW (00:21:12):
I think they do not understand LLMs.
EW (00:21:13):
Because they trust them.
CW (00:21:13):
They trusted it, and they thought this would speed things up. I think that is the case for most of these kinds of- I do not know about the Health and Human Services. Those people are crazy.
EW (00:21:23):
Actually this is one of those points, when this was mentioned on the Slack. Somebody said, "I cannot explain to my boss, why it is not the be all, end all, time-saving wonderful thing, that he thinks it is." This is part of the problem. Maybe people cannot understand, well, it writes crappy code, or it writes code that is very inefficient, or any of these things about code that people just do not understand.
(00:21:51):
But this example of it writes legal briefs with phantom precedents, case files, and it does not know that. That might be an example that helps non-techs understand a little bit of the LLM truly believes-
CW (00:22:15):
Believes nothing. It believes nothing.
EW (00:22:17):
The LLM truly wants you to believe, that the things that it hallucinated are correct when they are not. So when we take that back to technology, it hallucinates libraries for Python, which is just hilarious.
(00:22:32):
There are other things like this, so maybe this is a good story to have. The LLMs are generating papers that have bad references, and they are generating policy papers using bad references. Okay. Continue your-
CW (00:22:55):
I am going to blow through some of these, because I am taking too long. I mentioned before, it is multimodal. You might love it for making shell scripts and stuff. But it is also a single step away from producing propaganda, revenge porn, AI slop articles that poison the internet, images poisoning search results, all kinds of garbage. There is a lot of uncertainty if that happens.
(00:23:12):
They are not making any money.
EW (00:23:15):
Wait. You missed a couple up here, that inputs are largely stolen.
CW (00:23:19):
The inputs are largely stolen. Yes. To make an LLM, you need to train it on tons of information. That tons of information comes from the internet. So it came from code that was in GitHub, and answers that were in Stack Overflow. Everything everyone has ever written in a blog post, books, articles, magazines. And they did not get permission to use any of that.
EW (00:23:42):
The thing that struck me about that one, was the Aaron Swartz case.
CW (00:23:47):
Yes! Yes!
EW (00:23:52):
He downloaded a bunch of stuff from the internet, that he did not necessarily have permission for.
CW (00:23:58):
But it was mostly- I do not remember the exact details of it, but it was quasi public stuff.
EW (00:24:03):
Exactly. And he shared it. They decided to make an example, and they were going to throw the book at him. He committed suicide. Honestly, what they were talking-
CW (00:24:17):
It was academic papers and things. If I recall correctly.
EW (00:24:20):
Yeah. He was not doing anything wrong, and they were going to send him to prison forever.
CW (00:24:29):
Well. Yeah. Yeah.
EW (00:24:30):
They had vilified him. And then we get ChatGPT and all of these LLMs who are doing the same thing, but at a larger scale, including some of the same libraries that he was prosecuted for.
CW (00:24:45):
Yeah. JSTOR was the digital repository of academic journals, accessible through MIT's computer network, that visitors to MIT's open campus had access to, and Swartz as a research fellow had access to. So basically he took stuff that was lightly behind an access thing for students, and made it public. Which is okay.
EW (00:25:10):
The LLMs took all of that. Ripped through it.
CW (00:25:15):
They have taken everything. They are reaching the point where they are running out of stuff. They have trained on so much stuff, there is no more internet for them to add. Which is causing their later and later models to be-
EW (00:25:25):
Then they add themselves, and that is just much worse. Okay.
CW (00:25:28):
So that is a legal and moral issue. The next thing I was going to say is they are not making any money. There is tons of investment going into it. They are not making any money. OpenAI is losing money hand over fist.
EW (00:25:40):
This is so confusing to me.
CW (00:25:42):
They are losing billions.
EW (00:25:44):
It is like each query, in energy, is basically a bottle of water.
CW (00:25:49):
I am not going to get into the environmental stuff, because that is getting better. It is a weak reed to hang this stuff on now.
EW (00:25:56):
That is fine. But there is a cost associated with every query.
CW (00:26:02):
Yeah. Yeah. Which is mostly a loss for these companies.
EW (00:26:05):
Yes. There are people who are paying for subscriptions, so that their queries do not then get folded into the whole cake batter.
CW (00:26:15):
Well, I think most people are paying because you get cut off after a certain number of-
EW (00:26:19):
Oof!
CW (00:26:21):
I think it is being pushed really past its capabilities in a lot of places. So I do not know if any of you have seen, GitHub has new copilot agents that you can add to your team. They will autonomously assign themselves issues, solve them, file PRs and push them.
EW (00:26:37):
Was there not a Microsoft article about that?
CW (00:26:39):
I do not know. There is a Reddit thread, which I will find the link for, where this happened in one of the major repositories. I do not remember which one. It might have been a Java thing.
(00:26:49):
Basically it is a PR. It puts the PR up for a code review. The developers engage with the agent and do a code review. It is hundreds and hundreds of entries long. "No, that is wrong. Do this. No, that is wrong. Do-" It is the biggest waste of <laugh> time I have ever seen.
(00:27:16):
If it was a human, you would say, "No, stop. Pull this PR. We are going to go have a talk." But you cannot do that. And so it is just this endless string of, "Nope, that is wrong. Nope, that is wrong." And it keeps putting the PR back up with these changes, and getting in the loops I was talking about.
(00:27:33):
It is not ready to do things like that. But yet they are pushing these agents that are going to take autonomous action, which is really frightening to me.
EW (00:27:42):
Autonomous action.
CW (00:27:44):
Giving more excuses for management to abuse workers, through demanding more productivity. "Go use this and we will go faster," at the cost of quality. Because you are not going to have time to bird-dog its outputs, or just lay people off because we can do everything with AI. That is worrisome.
EW (00:27:59):
Oh, no. I think any company who believes that, should 100% do it.
CW (00:28:03):
And it has happened. There have been companies that- I think Klarna was the one that replaced their entire customer support team with chatbots. And they had to undo that and hire a bunch of people back, because the customers were not happy. Also look at the US federal government, and how it is being applied to layoffs there, or used as an excuse for things.
(00:28:22):
Now this is a point I want to make, and I am almost done. I promise I am almost done. Paradoxically, as these get better, this is going to get more dangerous. Because it is going to get closer and closer to working a lot of the time. When something works 95% of the time, but does not 5% of the time, it is really bad. Because you get into the zone where you are confident that it is working.
EW (00:28:49):
And then you will defend it.
CW (00:28:50):
And you will defend it. But it works well enough that you are confident, but not well enough. It screws up enough of the time, that it is dangerous. Same as self-driving cars. A self-driving car that does not make mistakes 95% of the time sounds great, until you think about that. It is going to make a mistake once every whatever, which is really often. And it is going to do it when you are not paying attention.
EW (00:29:17):
Once every 20 minutes.
CW (00:29:18):
Yeah, you are not going to pay attention. You are going to be lulled into a false sense of-
EW (00:29:20):
Because nineteen minutes of boredom means that- Yeah.
CW (00:29:24):
So people are going to trust them. They are going to apply them to more and more places where they are not applicable, or to more vital problems. See also the GitHub agent thing. Something that only works 90 to 98, even 99% of the time, that is terrible! You would not fly on an airplane that crashed 1% of the time.
(00:29:48):
Also, there is some confusing stuff. So all these companies, you listen to these CEOs and they will come out and they will tell you, "Within five years we are going to be getting rid of 20% of developers, and replacing them with ChatGPT or Claude or whatever." I think the Anthropic guy recently came out and said, "Yeah. We are going to replace developers in five years. We are going to do this."
(00:30:05):
Why do they have per user licensing? Their entire business model depends on developers paying money <laugh> to access their stuff, and it is not getting cheaper. So if they get rid of their customers, that seems contradictory. So I have trouble believing that they actually believe that.
(00:30:25):
I do not know what they believe, but it is a little galling to have them say, "We are going to replace developers, the very people who are paying per seat licenses for our stuff, and hoping that we get out of the hole that we are in." So I just find that interesting.
(00:30:40):
One final thing. It goes back to it being fun. This is happening outside the realm of tech. It is mostly anecdotal at this point, but it is sort of worrisome. You have a friend you can talk to all anytime, and that friend will not get mad at you. You can say whatever you want to that friend, and they will not get mad at you. They will not storm off. They will not not call you back.
(00:31:09):
And people are getting addicted to these things. That is one thing I have noticed in myself just a little bit, just in a little bit of usage. It is fun. I am talking to this entity, which seems to have a personality.
EW (00:31:21):
And it is smart.
CW (00:31:22):
It is smart. It waits for me. We are having this conversation. I can take a 30 minute pause and it does not say, "Well. I guess we are done," and leave. We can just pick it right back up. People are replacing other people with them. They are making them into friends, even into intimate partners.
EW (00:31:39):
Therapists.
CW (00:31:40):
Therapists.
EW (00:31:41):
Because what you really want, is to lay out your whole mental anguish, to something that may be recycling that into future-
CW (00:31:48):
To a Silicon Valley venture funded company? <laugh>
EW (00:31:51):
Yes.
CW (00:31:53):
Yeah. That is a worry. That is very meta compared to some of this other stuff. But there are social implications to creating something that appears to be alive; sentient and alive. We are skipping a lot past that. That is my final thought on that matter.
(00:32:14):
Like you said, I think people should be familiar with these. I think they should use them, and see what they are capable of, and what they are not capable of. I think they should be realistic with what they are seeing when they use them, and mindful about what is actually happening.
EW (00:32:28):
And you do not have to pay for all of them.
CW (00:32:30):
They are all free for some limited number of queries per day. Yes.
EW (00:32:38):
Yeah. I have used them to do things, and not hit that limit. It is not like it is only three and it is useless.
CW (00:32:44):
Oh, yeah. Yeah. Yeah.
EW (00:32:46):
You can definitely get a little bit of work done with the free ones.
CW (00:32:49):
And if you get into coding and you get in a loop, that is when you get kicked out pretty fast. It is like, "Okay. You have talked to me too much today. Come back tomorrow."
(00:32:54):
But remember what you are using. Remember what it can do. Pay attention to its failures, and pay attention to the implications to our world. Because these are not the AI that we think they are from science fiction, that are modeled on a human brain. That know things. That can be self-critical. That is a huge missing piece.
(00:33:28):
They are incapable of self-criticism. Until that happens, the whole general AI thing is just a pipe dream. That is what I think, David. I do not personally use them very much. I try not to use them at all. But I do dip my feet in the water occasionally, just to see if there are sharks.
EW (00:33:53):
Recently I came across a story, of someone successfully convincing a flat-earther that the earth cannot be flat. Because if it was, the edge points would be tourist attractions.
CW (00:34:06):
<laugh> Yeah.
EW (00:34:06):
Which is a weirdly convincing argument. Because if the world was flat, would you not want to go see where it ended? That would be super cool!
CW (00:34:19):
See also Terry Pratchett.
EW (00:34:22):
Right. Going back to your point about they are not making money, but they are pushing it really hard. They are not making money, but they want to give you more of it.
CW (00:34:36):
Well, part of the reason they are not making money is incredibly expensive to do-
EW (00:34:40):
It is incredibly expensive.
CW (00:34:41):
To do the training that they do.
EW (00:34:43):
Even the inference alone is expensive.
CW (00:34:45):
Inference is expensive, but it is a fraction of the training.
EW (00:34:48):
True.
CW (00:34:49):
They have to buy a lot of expensive hardware to do that training and run them in data centers. There is a lot of cost to that, and that may come down.
(00:34:55):
It is like the environmental argument, that I think is not a good thing for people who are AI skeptics to spend a lot of time on. Because as we know with tech, as things go on, things get cheaper, they get smaller, they get more efficient. So that argument is likely to go away. If you are standing on that ledge as your main point, then you need to regroup. But yeah, it is very expensive.
EW (00:35:19):
It is a little weird.
CW (00:35:20):
But I do not think it is just because it is high cost. I think generally they are pushing a lot of free stuff. So they are losing money, because most people using them are using the free tiers.
(00:35:31):
A lot of people are not using them, unless they are part of an operating system. Like Apple has done. Like Google has done. Those people are using them, so presumably they are getting money from Apple and Google. Well, Google has their own, so they are just paying themselves. But Apple is paying somewhat ChatGPT. No, actually they got a free deal, did they not? Anyway, Apple scammed ChatGPT out of including that. But yeah, it is weird.
(00:36:02):
There has been a pullback. Like Microsoft is pulling back some investments. I do not think it is going anywhere. That is why I care so deeply about it, because if I thought it was a flash in the pan, I would not be talking to you about it. But I think it is not going anywhere.
(00:36:20):
It is going to continue to improve, and therefore we are going to need to figure out how to deal with it as a society. I think that means regulation, and I think that means education. Obviously we are not in a place where regulation is going to happen right now, at least in the United States, but it is something that I think needs to be considered.
EW (00:36:41):
And in order to consider it, you have to understand at least a little bit of it. So if you can use it, try it out, make your own decision. It can be extremely helpful, especially if you are the sort that ends up looking up everything you do. Which is, some people work with memorization, some people work with looking stuff up.
CW (00:37:04):
I would agree with that. With the caveat of do not let its funness trick you into not using resources you are already good at using.
(00:37:11):
So if you already know how to look up physics stuff you need to know, or math stuff, or get answers for help on code, and you are already pretty efficient at that on the internet. This may seem more fun and efficient, but it may not be. In which case, do not get fooled and spend time with this, when you already have good skills to do the things you need to do. That is all.
EW (00:37:37):
Changing subjects. Oh! My book is out in Polish.
CW (00:37:41):
<laugh>
EW (00:37:41):
It will be out in Russian soon. It is also out in Portuguese, but that was a few months ago. It is of course out in English. There are quizzes on the Safari learning site, which is the O'Reilly site for books. I got nine out of ten on the quiz that I took.
CW (00:37:59):
A-ha.
EW (00:37:59):
So yeah, I have a book. It is called "Making Embedded Systems."
CW (00:38:04):
What did you miss?
EW (00:38:05):
I do not know. It did not tell me which one I missed.
CW (00:38:07):
What!
EW (00:38:07):
I know.
CW (00:38:09):
How are you supposed to learn?
EW (00:38:13):
Okay! Wow. I had more questions than this. We are not going to get through them all.
CW (00:38:20):
We cannot have a longish episode.
EW (00:38:24):
Brian, upon listening to "Inside the Armpit of Giraffes," said that he knew what he wanted to do when he is sick of working for the man. Design embedded systems for ecologists. Brian also wants to follow Akiba's lead, and answer Meredith's call for more engineers to join the effort, to understand our world so we can make better choices for the planet.
CW (00:38:51):
Nice!
EW (00:38:52):
I totally agree. We had a couple of other folks who talked about how that episode made them want to go try things. I definitely went to wildlabs.net and looked around, even though I am currently overbooked. The end uses of technology are so fascinating, and that is just one of them that- I mean animals!
CW (00:39:17):
<laugh> A-ha.
EW (00:39:17):
Let us see. What else do we have? Oh, Chris and I got e-bikes. I did not expect to have quite so much fun riding a bike again. Part of the fun is because now there exist cycleways. Places where it is mostly just for bikes. We have some here in Santa Cruz that-
CW (00:39:49):
It is a patchwork right now, but someday it will be 31 miles from South County to North County.
EW (00:39:55):
And they are not just bike lanes, which are a little scary.
CW (00:39:59):
They are completely separate, in most cases, from roads. Yeah. No, the e-bike technology has gotten really fun.
EW (00:40:09):
We live on a hill, and there has always been this thing about bikes that the last quarter mile would be miserable.
CW (00:40:16):
Mm-hmm. Almost to the point where you would walk them up, rather than-
EW (00:40:20):
Even with the e-bike, I still sometimes do not manage that last hill, and will walk it that way.
CW (00:40:25):
You got to have enough speed coming into it!
EW (00:40:28):
But there is a turn! If I am going too fast around that turn, I am just going to wipe out!
CW (00:40:32):
All right.
EW (00:40:32):
Anyway. I have to say that while I do not use the e-bike part of it, except for that, I just am really liking-
CW (00:40:45):
It is a great leveler, because you are so much better a cyclist than I am, in terms of power. If I did not have the e-bike part, we would not be able to cycle together, because I would flame out probably halfway through the ride. So I can just put it on one pip and get a little bit of a boost. It is not doing everything for me, but it is a great leveler, so it is very nice.
(00:41:09):
And then on hills, like you said- The nice thing about it is, if you want to take a long ride, but you are not sure you want to take a long ride, because you are tired. You just turn that up a little bit, and it does some of the work for you. So now you can have a nice ride, without it necessarily being exercise.
EW (00:41:27):
That was the thing, is we can ride out until we get to the point where we are kind of tired of riding, or almost tired of riding. That point that I used to think, "Well, that cannot be the midpoint." But then you can e-bike home.
CW (00:41:42):
Or at least have it assist you most of the way. Yeah.
EW (00:41:44):
Yeah.
CW (00:41:44):
Yeah. So it is a lot easier. Yeah. I highly recommend them.
EW (00:41:50):
It has been a lot of fun. Just the joy of the freedom of riding bikes.
CW (00:41:57):
Like I said to you, when you are driving someplace, everything goes by so fast. You do not see a lot of stuff. Walking and cycling are at a speed that you see everything between you and point A and point B. It is just a different experience. You are outside.
EW (00:42:15):
It helps we live in a very pretty place.
CW (00:42:18):
Yeah.
EW (00:42:19):
Okay. Sergio asks, "Are you learning Rust? Is it being used in industry?"
CW (00:42:27):
No and yes. I have not personally seen it on a client, but I know of projects that use it.
EW (00:42:36):
I agree. I am not learning it, because I am very good at C and C++, Python. I do not see the advantage of Rust.
CW (00:42:47):
There are advantages to Rust. I do not know-
EW (00:42:50):
I would have to learn it.
CW (00:42:52):
Yeah. Yeah. I do not know-
EW (00:42:54):
And my team would have to learn it.
CW (00:42:56):
Yeah, that is the big piece. I do not know where it is going to end up. It probably will have a place in embedded systems.
EW (00:43:06):
But I hear Zig is the new hotness.
CW (00:43:08):
I do not know. So here is the real deal. I am going to be retired before it is required for me to learn any of those things, so I am not bothering.
EW (00:43:17):
I would rather learn COBOL.
CW (00:43:20):
I do not think that is very useful.
EW (00:43:22):
I am already pretty proficient with FORTRAN 77.
CW (00:43:26):
I think they have Fortran 93 now, so you got to get up to speed.
EW (00:43:29):
I know! I am so far behind. <laugh> Okay.
CW (00:43:35):
Well, we could do cool algorithms, or the derived question about cool algorithms. Oh, I see. Those are kind of connected questions.
EW (00:43:45):
Or people who influence your life.
CW (00:43:48):
Let us save the influence life one, because that could be long, and I would need to prep for that.
EW (00:43:54):
Okay. Simon asks for cool control algorithms.
CW (00:43:58):
Control algorithms. Okay, well that limits things.
EW (00:44:00):
Kalman filters, PIDs are common. Have we found any that have worked for memorable applications? "What are they called and how do they work?"
(00:44:08):
And then Tom Anderson followed up on that with, "How much of embedded controls are derived, and how much is ad hoc tuning?" Where derived involves fancy math.
CW (00:44:21):
Okay. Well, my answers to this are kind of broken then, because I missed the controls being a thing. But I have something to say about fancy math later.
EW (00:44:30):
No, no, go ahead and do the fancy math.
CW (00:44:31):
No, no, no, no. Let us talk about the controls part first, because it makes more sense to do. I have not done a lot of controls, so I am out of my depth here answering any of that.
EW (00:44:44):
Kalman and PID are still...
CW (00:44:46):
Widely used. Yeah.
EW (00:44:48):
My go-to. I am working on a project that has an inverted pendulum as a piece of it, and have run across the Segway algorithm. They have a patent that very well describes how they do their algorithm. That involves how far your pendulum is pitched over, your pendulum pitch rate, the distance your wheeled body has traveled, and the rate at which your wheeled body is traveling.
(00:45:22):
It is a neat algorithm. It is very effective for this. It is very well considered. There are all kinds of things for specific implementations. But it is a pretty canned algorithm that is used in a lot of places, that I was unfamiliar with a year ago. I am a lot more familiar with it now.
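(A rough sketch of that kind of controller, for the curious: full-state feedback over the four quantities the patent names, with the result clamped to what the motor can do. Every gain, limit, and name below is a hypothetical placeholder rather than Segway's actual numbers; real values come from tuning or from the linearized equations of motion.)

```python
# Minimal sketch of a Segway-style balance controller: a weighted sum of the
# four measured states, clamped to the available motor torque.
# All gains and limits are hypothetical placeholders.

MAX_TORQUE = 5.0     # N*m, assumed actuator limit
K_PITCH = 35.0       # torque per radian the pendulum is pitched over
K_PITCH_RATE = 2.5   # torque per rad/s of pitch rate
K_POSITION = 1.2     # torque per meter the wheeled body has traveled
K_VELOCITY = 1.8     # torque per m/s the wheeled body is moving

def balance_command(pitch, pitch_rate, position, velocity):
    """Compute a motor torque command from the four measured states."""
    torque = (K_PITCH * pitch
              + K_PITCH_RATE * pitch_rate
              + K_POSITION * position
              + K_VELOCITY * velocity)
    # Never ask the motor for more than it can physically deliver.
    return max(-MAX_TORQUE, min(MAX_TORQUE, torque))
```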
CW (00:45:50):
At the start of that, you start with the equations of motion. The differential equations that govern how an inverted pendulum behaves.
EW (00:45:57):
Right.
CW (00:45:57):
I assume they use Lagrange or something to derive those. Do they talk about the numerical methods they use to solve those? Because that is the thing in Tom's and Simon's- Well, Tom's question anyway. There are lots of control algorithms. There are lots of things that are derived from physics, and they tend to just come out of equations of motion, or linked differential equations.
(00:46:26):
The two tricks with that of course, is that those are just models, and there are other things that come into those that are not in the equations. Friction, heat, the way your physical materials change.
EW (00:46:41):
End points. Boundary conditions.
CW (00:46:43):
Boundary conditions, things like that, that change. But also a really big one is, even if you have got the math and it models the world perfectly, you have got to jam it into a computer. And computers are not things that solve differential equations very well. You need to do numerical methods to approximate solutions. So that is where error comes in.
(00:47:06):
Even if you have got perfect physics on one end, as soon as you put it in a computer, you need to adapt it to how computers work to solve them. And those solutions have errors because they are not solving it perfectly. Do the Segway people talk about their numerical methods?
EW (00:47:22):
No.
CW (00:47:23):
Okay.
EW (00:47:23):
No, because their physics is nowhere near that complete.
CW (00:47:30):
Well, it has to do some work. It does not have to be complete. They still have to solve- I assume they are solving a differential equation.
EW (00:47:34):
Numerical issues are so far down the list of things they do not solve. Not just the Segway thing. Friction is just something-
CW (00:47:49):
Right, right. But that is just another term.
EW (00:47:52):
But it is not a term that is in the- Since it is a feedback loop, you can sometimes ignore those terms.
CW (00:48:00):
Right. Okay.
EW (00:48:01):
But then you know you are not really in physics world. You are in the real world, and you do not go as far, and that is okay because you measured you did not go as far, and so you go a little further. And so the numerics just are not-
CW (00:48:18):
But the question I-
EW (00:48:19):
They are never on my radar.
CW (00:48:20):
Right, okay. But. Hmm. Interesting.
EW (00:48:23):
Other things like perturbations in a perfectly flat-
CW (00:48:27):
Because what you do not want to have happen is- Like when you are just doing integration. This is a well-known problem.
EW (00:48:32):
<sigh>
CW (00:48:32):
If you just are trying to do integration on a computer, eventually things blow up.
EW (00:48:37):
Integration of error terms. Tiny, tiny error terms.
CW (00:48:39):
Because there is always error, and it always accumulates.
EW (00:48:42):
The smaller your error terms, the more likely it is that everything goes bad.
CW (00:48:47):
And there is tuning when you are solving differential equations. Because if you choose a time step that is too small, sometimes things do not work right. If you choose a time step that is too big, eventually things blow up because you accumulate error. Because to solve differential equations, you need to integrate. So anyway, you need numerical methods to follow differential equations.
(00:49:08):
That was just a question I had. Because the fancy math and getting the fancy math is one thing. But even if you get it perfectly, computer's view of fancy math, is not the human view of fancy math.
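(To make the time-step point concrete, here is a tiny forward-Euler simulation of an ordinary pendulum. The pendulum and the numbers are only illustrative; the point is that the same stretch of simulated time drifts further from reality as the step gets bigger, because each step's small error accumulates.)

```python
import math

# Forward-Euler integration of theta'' = -(g/L) * sin(theta), a plain pendulum.
# Each step carries a small truncation error, and those errors accumulate.

def simulate(dt, steps, theta0=0.5, g=9.81, length=1.0):
    theta, omega = theta0, 0.0
    for _ in range(steps):
        alpha = -(g / length) * math.sin(theta)
        theta += omega * dt   # small error introduced here...
        omega += alpha * dt   # ...and here, every single step
    return theta

# Ten seconds of simulated time with two different step sizes:
print(simulate(dt=0.001, steps=10_000))  # small step: stays close to the true swing
print(simulate(dt=0.05, steps=200))      # large step: visibly drifted, and it only gets worse
```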
EW (00:49:22):
Let me go on with what Tom wrote, since he actually- We are using "fancy math" in the way he did, and I do not know that we have defined it well enough. Tom says, "For example, I consider tuning a PID empirically to be ad hoc. Deriving good PID parameters from physics and control theory, is successful fancy math." So Bode plots and whatnot.
(00:49:43):
"A Kalman filter is good fancy math. But a Kalman filter with an if statement that bails out when some parameter goes out of limit, is not so successful. Same with a PID. Does it have an ad hoc limit on the integrator term? Then it is not so fancy, unless there is some mathematical analysis behind it."
(00:50:04):
"How often is fancy math attempted and how often does it work cleanly? What are things that cause the fancy math to fail?" Backlash in motors is-
CW (00:50:16):
Physical reality. Yeah.
EW (00:50:17):
Physical reality that is really hard to model, and small enough that you can usually just tweak to get it.
CW (00:50:25):
Yeah. The big question is modeling. Is modeling useful? Yes. Is modeling accurate?
EW (00:50:31):
Modeling is super useful.
CW (00:50:34):
What is the thing about modeling? All models are wrong, but some are useful.
EW (00:50:38):
Exactly. Exactly.
CW (00:50:39):
Right. So that is where you come into this. I look at the things like the bail outs and the if statements a little differently, because I do not trust the way computers do numeric work. Those are things I would expect as de rigueur to have fail safes.
(00:50:56):
But I understand where he is coming from, with should the math not be analytically worked out. Such that you do not need those, and these filters work within the bounds of the problem. Or the PID works within the bounds of problem. But that is fine until your inputs are unexpected. Right?
EW (00:51:18):
Okay. If I am tuning a PID.
CW (00:51:20):
Yeah.
EW (00:51:21):
I have inputs. And outputs.
CW (00:51:29):
Yeah. Go ahead.
EW (00:51:31):
So unit analysis to me is very important here. If your inputs are in milliliters per second, and your output goal is in...
CW (00:51:44):
Gallons.
EW (00:51:45):
Gallons, you need to have these be on the same playing field. Each step that you go through, you should be thinking about the terms. Because if you are secretly changing from degrees to radians inside the PID, you are doing it wrong.
CW (00:52:04):
Oh, sure, sure. Yeah.
EW (00:52:06):
And yet it is really easy to make that sort of mistake. A Kalman filter too. You need to have every-
CW (00:52:13):
Well Kalman filter, because you are taking disparate things sometimes.
EW (00:52:16):
Yes. And you need them to be compared at the same...
CW (00:52:21):
Scale.
EW (00:52:22):
Scales. So that level of math is more on the ad hoc side. That is preparing everything-
CW (00:52:31):
I do not think that is ad hoc. That is just-
EW (00:52:33):
Well, let me finish.
CW (00:52:34):
Okay.
EW (00:52:34):
It is preparing everything to be tuned manually.
CW (00:52:37):
I see.
EW (00:52:39):
You can tune manually without doing that, but you really are just poking around in space. Once you have all of the terms in the same units, now you look at your high and low points for input and output. If they are not translatable, then one of them needs to shift.
(00:53:01):
Like, you need to limit your input. Maybe it is a physical limit on how much can come in. Anyway. And then for me, I do proportional until it works a little bit, and is a little out of control. And then I do a little bit of derivative, until it is no longer out of control.
(00:53:22):
But even the derivative, there are different formulations for how the derivative goes. Whether you are taking the derivative of the error, or- Anyway. There are methodologies. And I am happy hand tuning a PID as long as it is tunable. And as long as I understand that the input range and the output range can work together.
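(For reference, a bare-bones version of what is being described: a PID where the units are agreed on before anything is computed, the output is limited to what the actuator can accept, and the gains are the knobs you turn by hand, proportional first and then derivative. Everything here, gains included, is a hypothetical sketch rather than anyone's production controller.)

```python
class PID:
    """Bare-bones PID controller. Setpoint, measurement, and output must already
    be in consistent units; do any degrees-to-radians or mL/s-to-gallons
    conversion before calling update()."""

    def __init__(self, kp, ki, kd, out_min, out_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # Derivative of the error; differentiating the measurement instead is
        # another common formulation, and avoids kicks when the setpoint jumps.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, out))

# Hand-tuning flow: start with kp only, raise it until the response is a little
# out of control, then add kd until it settles. ki comes last, if at all.
pid = PID(kp=2.0, ki=0.0, kd=0.3, out_min=-1.0, out_max=1.0)
```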
(00:53:54):
Okay. That said. With this inverted pendulum problem I have going, I have wandered around the parameter space and I cannot get it to work. I have the parameters, I understand what each one does. Similarly, I understand proportional integration, what that- Proportional terms and what they do, versus integral, versus derivative. I understand.
(00:54:25):
And I understand in the inverted pendulum what they do, which ones increase stability, which ones increase speed, and yet there is no- Even though my input terms and my output power indicate that there should be a solution, there is no solution that I can find.
(00:54:45):
Am I in a local minimum? Probably. But I have wandered around. So what I really should have done a week ago, and what I should do now, is go back to the videos describing the problem setup, and how to find these parameters given the problem setup. Even knowing that that problem setup will not take into account some of the problems I have. Like-
CW (00:55:16):
Yeah. Right.
EW (00:55:16):
Being on a ramp.
CW (00:55:17):
Right, right, right. You are in a middle ground, where you do need to do some math.
EW (00:55:22):
I need to find the shape.
CW (00:55:24):
Even if the model is not perfect, you do need a model.
EW (00:55:29):
I have a model, but my model- I am not connecting them well enough. Eventually Tom used the word "ad hocery."
CW (00:55:39):
I am not a fan of the ad hoc thing. I think there is empirical, which he said, and there is analytical stuff. I think there is a place for empirical.
EW (00:55:51):
Oh, totally.
CW (00:55:54):
I just feel like "ad hoc" is a little dismissive. Like, we are just winging it, when-
EW (00:56:00):
Well-
CW (00:56:01):
Sometimes it is-
EW (00:56:01):
I had an Excel sheet with- And I kept pushing the different parameters, and then-
CW (00:56:05):
I know. You were winging it.
EW (00:56:06):
Looking at the stability and-
CW (00:56:08):
You were winging it. But there is a continuum.
EW (00:56:10):
Like a monkey, I would just try things over and over again.
CW (00:56:12):
That, I would say, is ad hoc, and it is probably worth...
EW (00:56:15):
But I knew what it was supposed to do. It just was not doing it. Which means that I did not actually know what I was supposed to do. I thought I knew what I was supposed to do.
CW (00:56:23):
Well, that is the key, right? That is the key, is you can be in a regime where you have this out of the box thing and you do not understand it. But people say to use it. And you put stuff into it and turn things, and sometimes it works. And that is ad hocery.
(00:56:38):
Or there is, "I understand this completely. I know how to set it up. And now that I have it, I need to tune." Which is-
EW (00:56:47):
Which is was what I thought I was doing!
CW (00:56:49):
Necessarily empirical, because I am on a real system that has real limitations. Yeah, no. That all makes sense. I think the thing I was saying about the if statement that bails out and stuff- Personally, I do not think there are a lot of control systems that internally handle all error conditions.
(00:57:11):
Like if something breaks. If a sensor breaks and instead of being in zero to one range, says 50,000 or ten to the eighth. And you use that as the input to your PID or to your Kalman filter, it is not the filter or the PID's job to fix that.
EW (00:57:27):
No. But you should have identified that, before you called the PID.
CW (00:57:30):
Yeah, you should- Right.
EW (00:57:31):
It is not an if statement in the PID.
CW (00:57:34):
An if statement that bails out, if some parameter goes out of limit.
EW (00:57:38):
Well, he also mentioned "integrator term."
CW (00:57:40):
Well, sure. Yeah, I do not know. I would say by his definition, no, there is not a lot of <laugh> pure fancy math happening in control systems, by necessity. Because you need to deal with reality of being on a computer.
EW (00:57:59):
I think people who do it, do it and have the problem that their math does not work in the real world. And the people who are more empirical, do it and then run into problems where they cannot solve it.
CW (00:58:09):
There is all the stuff like MATLAB Simulink-
EW (00:58:12):
Yeah. Oh, no they are all-
CW (00:58:12):
Where you take a model and it produces code and stuff. I do not know what kind of error handling you can strap onto that, or it is part of that. I have never done that.
EW (00:58:23):
You get parameters. But things like bad sensors should be handled before you go into the filter.
CW (00:58:29):
Yeah, exactly. But you might need to-
EW (00:58:31):
But things like integrators-
CW (00:58:32):
You might need to handle it on the output as well. Right? You might get- You might...
EW (00:58:37):
Not if you have your tuning right, because you should not be able- No matter what input you have-
CW (00:58:43):
Do you trust your code?
EW (00:58:44):
You should not- I mean, a PID filter, you can run it offline.
CW (00:58:51):
Right. No. What I am saying is, you have to trust that your input filtering is right, that the combinations of input filtering are right. It does not seem that expensive to say, "Oh. My Kalman filter says turn to the right at 700 degrees per millisecond," and say that that is out of bounds.
(00:59:13):
We did that with laser control. We had a control system that had integration and stuff- Not numerical integration. It was literally integrating power and things like that. But we had limits. Like, "If this comes out and says this, it is lying. Stop."
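As a rough illustration of that kind of output limit (not the actual laser system, and not any particular library), a sanity check on the command coming out of a filter or controller might look something like this sketch. The limit value and the names are made up.

```c
/* Sketch of a sanity check on the controller's output, in the spirit of
 * "if this comes out and says this, it is lying. Stop." The limit and
 * the names are invented for illustration. */
#include <math.h>
#include <stdbool.h>

#define MAX_TURN_RATE_DEG_PER_SEC 200.0f   /* assumed physical limit of the actuator */

bool command_is_sane(float turn_rate_deg_per_sec)
{
    /* Reject NaN/inf and anything the hardware could never do, such as
     * "turn to the right at 700 degrees per millisecond". */
    if (!isfinite(turn_rate_deg_per_sec)) {
        return false;
    }
    return fabsf(turn_rate_deg_per_sec) <= MAX_TURN_RATE_DEG_PER_SEC;
}
```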
EW (00:59:28):
Yes.
CW (00:59:30):
So maybe I am confusing what he is saying.
EW (00:59:32):
That is a safety check.
CW (00:59:33):
Yeah. That is where my brain is going, is...
EW (00:59:37):
This is more like when you have a PID system, and your system gets really close to the answer, but does not quite get to the perfect spot.
CW (00:59:48):
So basically you are putting your finger on the inside of the control system, to nudge it because it is doing something wrong, but you do not know why. So you are kind of...
EW (01:00:00):
No, no. So you have this. It is almost perfect.
CW (01:00:01):
Yeahh.
EW (01:00:03):
But because your motor takes a little bit of oomph to get started, it does not immediately move when you nudge it.
CW (01:00:11):
Okay.
EW (01:00:11):
It has to get to some level.
CW (01:00:14):
Okay.
EW (01:00:14):
So you have an integrator saying, "Well, the error is still here. I am going to add up. The error is still here. I am going to add up." And once it gets past that level where it can move the motor, because it is right next to where it is supposed to be, it jumps over. Because it cannot move that small of a space.
CW (01:00:34):
Yes. Okay.
EW (01:00:35):
And now it oscillates back and forth. If it is lucky, and then it stops just a little bit off. And so you have this motor that is supposed to be stopped, and every once in a while it goes, "Boing! Ddddrm. Boing! Bbbbrm." Okay. So if you make it so that your integral term is less than the amount to turn on the motor, then if you are close enough, it will not turn on the motor.
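A minimal sketch of that integrator idea, with invented names, gains, and thresholds. This is one possible way to keep the integral contribution below the level that would kick the motor, not the method used in the project discussed here.

```c
/* Minimal sketch of the integrator idea above: stop accumulating error
 * inside a small "close enough" band, and clamp the integral so its
 * contribution stays below the command that would start the motor.
 * All names, gains, and thresholds here are invented for illustration. */

typedef struct {
    float kp, ki;          /* proportional and integral gains            */
    float integral;        /* accumulated error                          */
    float deadband;        /* error smaller than this is "close enough"  */
    float integral_limit;  /* sized so ki * integral_limit stays below
                              the command that overcomes motor stiction  */
} pid_ctrl_t;

float pid_step(pid_ctrl_t *pid, float error, float dt)
{
    /* Inside the deadband, do not wind up: a tiny residual error should
     * not slowly build enough integral to make the motor go "boing". */
    if (error < pid->deadband && error > -pid->deadband) {
        /* leave pid->integral alone (bleeding it off is another option) */
    } else {
        pid->integral += error * dt;
    }

    /* Clamp the accumulated integral in both directions. */
    if (pid->integral > pid->integral_limit) {
        pid->integral = pid->integral_limit;
    } else if (pid->integral < -pid->integral_limit) {
        pid->integral = -pid->integral_limit;
    }

    return pid->kp * error + pid->ki * pid->integral;
}
```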
CW (01:01:03):
Yeah, all right. That is a difficult question.
EW (01:01:08):
Tom's point of, do we usually empirically solve things, or do we do the math? I am kind of sad to say, I am an empiricist. I would like to do the math more often. I am just not that good at it.
CW (01:01:22):
That is the place of engineering that is downstream of theory. You are not inventing new control systems. That is not- You have existing control systems that...
EW (01:01:31):
But I am not setting up the problem to solve it with Bode plots and all of that.
CW (01:01:35):
All right.
EW (01:01:35):
I think sometimes my life would be easier if I did.
CW (01:01:42):
I misunderstood the question. There are a few areas where fancy math is happening all the time. I would just like to point out that fancy math does exist.
EW (01:01:52):
Oh. Yeah. Kalman filter alone is fancy math.
CW (01:01:56):
Modeling physical aspects. Fancy math? We had optics in many of the systems I used. You model those on a computer, and they make optics, and they work. There is no putting the finger on the scale, because it is glass. If you put a finger on the glass, it leaves a smudge.
(01:02:11):
Heat transfer and stuff like that. All that CS-y math stuff, like fancy lookup tables. Digital design, how all semiconductors are built with simulation. Shortest path, finding out- All those algorithms are closed form, fancy math. Those things, the CS-y things, do not need any of the stupid numerical things. Because they are not numerical. They are graph theory and stuff like that.
(01:02:44):
Those are a fun place of fancy math, that actually does exist in computers. Because Dijkstra's algorithm for the shortest path does not do any approximations. It just works. Anyway, I wanted to put in a little plug for some fancy math that does exist.
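For the curious, a tiny, self-contained example of that exactness: a plain O(n squared) Dijkstra over a made-up five-node graph. There is no tuning and no approximation; the distances it prints are exact.

```c
/* Small illustration of the "no approximation" point above: Dijkstra's
 * shortest-path algorithm on a tiny graph gives an exact answer, no
 * tuning, no numerical error handling. The graph itself is made up. */
#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

#define N 5
#define INF INT_MAX

int main(void)
{
    /* adjacency matrix: adj[u][v] = edge weight, 0 = no edge */
    int adj[N][N] = {
        {0, 4, 1, 0, 0},
        {4, 0, 2, 5, 0},
        {1, 2, 0, 8, 10},
        {0, 5, 8, 0, 2},
        {0, 0, 10, 2, 0},
    };
    int dist[N];
    bool done[N] = {false};

    for (int i = 0; i < N; i++) dist[i] = INF;
    dist[0] = 0;                       /* source vertex 0 */

    for (int iter = 0; iter < N; iter++) {
        /* pick the unfinished vertex with the smallest distance */
        int u = -1;
        for (int v = 0; v < N; v++) {
            if (!done[v] && (u == -1 || dist[v] < dist[u])) u = v;
        }
        if (dist[u] == INF) break;     /* remaining vertices are unreachable */
        done[u] = true;

        /* relax every edge out of u */
        for (int v = 0; v < N; v++) {
            if (adj[u][v] && !done[v] && dist[u] + adj[u][v] < dist[v]) {
                dist[v] = dist[u] + adj[u][v];
            }
        }
    }

    for (int v = 0; v < N; v++)
        printf("shortest distance 0 -> %d: %d\n", v, dist[v]);
    return 0;
}
```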
(01:03:03):
I have stunned you into silence!
EW (01:03:07):
I just want to be better at fancy math. I have spent time getting better at fancy math. Then as soon as I do not use it for a little while, it all just falls away.
CW (01:03:21):
That is not something you can just pick up and put down.
EW (01:03:24):
It does make it easier to relearn it a second time, and a third time, and a 12th time.
CW (01:03:32):
It is really not your job, so you do not do it all the time.
EW (01:03:36):
Yeah. Sometimes I wish it was my job. But right now I do not feel like it can be, because I am at a low point.
CW (01:03:40):
At other places, you would have someone doing the fancy math and then saying, "Here is the stuff. Turn this into code."
EW (01:03:49):
Then usually I can talk to that person about, "What does this mean?" and, "How do I do this?" and yeah. I know.
CW (01:03:54):
You are doing hard stuff. That is not- First of all, you are trying to interpret a patent, which is never written so you can reproduce it.
EW (01:04:03):
<laugh> No. It really is not.
CW (01:04:05):
You are doing-
EW (01:04:06):
I mean, it is a good patent. I will try to find a good link to it.
CW (01:04:08):
You are doing relatively difficult graduate-level physics, mechanics. And then combined with your control system, and the numerical aspects. And the things that are probably not going right for other reasons. Yeah, it is a really difficult problem. So you should be easier on yourself. Or ask a carbon-based life form for help.
EW (01:04:37):
<laugh> Okay! Well, do you want to talk about William's question? Or do you want to go get some lunch?
CW (01:04:45):
No. William's question, like I said, is-
EW (01:04:46):
Too long.
CW (01:04:47):
For another time, and when I have some time to think about...
EW (01:04:51):
It is about people who make an impact on your life.
CW (01:04:54):
Yeah. I would want to-
EW (01:04:55):
Everybody wants to think about those people, and...
CW (01:04:59):
Like to rummage through the dusty shelves and the cobwebs of my brain. Yeah. I think that is the show.
EW (01:05:08):
All right!
CW (01:05:08):
I apologize to everyone who disagrees with me, but-
EW (01:05:12):
But you are wrong!
CW (01:05:14):
You are wrong. And that is your problem.
EW (01:05:17):
Thank you to Christopher for co-hosting and producing. Thank you for listening. Thank you to our Patreon listener Slack group for their questions and their excellent discussions.
(01:05:26):
If you would like to contact us, it is show@embedded.fm or hit the contact link on the embedded website. Which is cleverly disguised as http colon slash slash embedded dot fm.
CW (01:05:46):
Https.
EW (01:05:46):
Fine. Whatever. Just-
CW (01:05:49):
<laugh>
EW (01:05:49):
Type it into Google, or whatever. DuckDuck.
CW (01:05:53):
Just Google for embedded.fm.
EW (01:05:55):
You will find us. And then there is a contact link. Okay.
(01:05:57):
[Winnie the Pooh excerpt]