Embedded

471: Bicycle Built For Two

Transcript from 471: Bicycle Built For Two with Andrew Ikenberry, Christopher White, and Elecia White.

EW (00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Andrew Ikenberry. Let us talk about teaching computers to sing.

CW (00:18):

Hi Andrew, thanks for joining us.

AI (00:20):

Thanks for having me. Pleasure to be here.

EW (00:21):

Could you tell us about yourself, as if we met at, oh, I do not know-

CW (00:26):

NAMM!

EW (00:27):

NAMM booths.

AI (00:29):

NAMM. I am very familiar with that. Definitely. My name is Andrew Ikenberry. I am an electronic musical instrument designer. I have also co-founded a couple of different music tech companies. I started my career when I was a senior at the Berklee College of Music in Boston.

(00:45):

While I was there, I co-founded a company called Qu-Bit, which makes modular synthesizers. Over the years, Qu-Bit has released somewhere over a hundred products. They have gone on to be used by a lot of incredible artists, sound designers, including people like Deadmau5, Martin Gore, Trent Reznor, and many, many others.

(01:07):

Then a few years back, I kickstarted a new company called Electrosmith, which makes the Daisy platform, which is what brings me here today.

EW (01:18):

All right. We are going to ask a lot about the Daisy platform, and some about some of the others.

CW (01:23):

Well, I want to ask about Deadmau5.

AI (01:24):

<laugh>

EW (01:26):

It is not Mouse Rat?

CW (01:28):

Mouse Rat? <laugh> No, that is something different. Never mind. <laugh> Let us go on to lightning round. <laugh>

EW (01:34):

<laugh> You were going to ask about Dead Rat. I am sorry.

CW (01:40):

<laugh> Dead Rat?

AI (01:40):

<laugh>

CW (01:40):

No, it is fine. I was just going to ask how he drinks through the helmet, but that is fine.

AI (01:46):

I am not privy to that information.

CW (01:48):

Yeah, that is fine.

AI (01:48):

<laugh>

EW (01:50):

Okay. Before I can screw up that band name again, let us go with lightning round, where we ask you short questions and we want short answers. And if we are behaving ourselves, we will not ask how and why, and are you sure, and what about. Are you ready?

AI (02:06):

Ready. Let us do it.

CW (02:07):

What is your favorite instrument?

AI (02:10):

Modular synthesizer.

EW (02:12):

That was kind of a gift.

AI (02:14):

<laugh> That is too easy. Come on.

EW (02:16):

Favorite input to a synthesizer, like keys, knobs, buttons, sensors?

AI (02:21):

It is going to have to be cutoff frequency. I know that is a little vanilla, but got to do it. Cutoff frequency to a filter.

CW (02:27):

Best booth you visited at NAMM, besides your own? Or best stage thing you have seen there?

AI (02:33):

Great question. Let me see.

EW (02:36):

How could you choose just one?

AI (02:37):

Right, exactly. That is tough, but one does stand out. Novation had this booth one year with a whole floor- It looked like one of their Launchpads. All these different grids would light up, and you are supposed to dance on it. It was just an absolute blast. I spent way too much time hanging at their booth, just dancing on this Launchpad. <laugh>

EW (03:03):

Actually, could you describe what NAMM is? Because we have used it a couple times, but I am not sure our audience knows.

CW (03:07):

Oh, right. Yeah.

AI (03:09):

Yeah, so NAMM is essentially the- Definitely the country's, if not the world's, largest music trade show. It specializes in the musical instrument and pro audio space. So you are going to see companies like- Well, you used to see companies like Fender, Gibson, Roland, KORG, et cetera.

(03:27):

But it has obviously had a hard time since the pandemic. So your mileage may vary as far as which companies you see these years, but it is coming back. I was just there a couple of weeks ago.

CW (03:37):

And when you go there, it is not just the industry manufacturers and stuff. A lot of famous musicians are just wandering around there. So it is kind of a weird experience.

AI (03:46):

It is very interesting. Yeah. Ostensibly it is an industry only trade show, so you have to get a ticket. You cannot purchase a ticket. You have to get it through- In theory, you are getting it through your job. Your work is going to get tickets and then you are going to go negotiate- You are going to figure out what you want to order for the next year. See what upcoming products these brands have.

(04:09):

But what ends up happening is the real business takes place on Thursday and Friday. And then Saturday and Sunday, everybody within driving distance gets a ticket from their buddy who has a guitar store. And then they drive down and walk around for the day, and just check out all the cool stuff.

EW (04:23):

Okay, back to lightning round. Sorry about that.

AI (04:26):

<laugh> That is not very short.

EW (04:28):

Best Southern California beach?

AI (04:31):

I am going to have to go with the one I live nearest to, which is San Clemente. So let us go with T-Street.

CW (04:37):

Fish tacos, burritos or tamales?

AI (04:39):

Fish tacos.

EW (04:41):

Musician, engineer or CEO?

AI (04:43):

That is a tough one. I am going to go with my gut. Musician.

CW (04:47):

Complete one project or start a dozen?

AI (04:50):

Always one. Always one project.

EW (04:54):

Favorite fictional robot?

AI (04:55):

Bender from "Futurama."

CW (04:57):

And do you have a tip everyone should know?

AI (05:00):

Back up your files. <laugh>

CW (05:01):

<laugh>

EW (05:03):

Oh, a tip born of frustration and experience. It is like the tip where you should not catch a falling soldering iron. <laugh>

CW (05:13):

Back up your files in two places.

AI (05:14):

<laugh> Exactly.

CW (05:14):

One of them not in your house.

AI (05:17):

Yeah. One is none, right? One is none. Never forget that.

EW (05:25):

<music> I would like to thank our sponsor this week. Nordic Semiconductor specializes in ultra-low power wireless communication. They have a wide technology portfolio including Bluetooth Low Energy, Low Energy Audio, Bluetooth Mesh, Thread, Matter, Cellular IoT, and Wi-Fi.

(05:44):

They have thousands of customers worldwide, with 40% market share in Bluetooth Low Energy. They have nearly two million systems on a chip produced every day. Sounds like a system that you cannot go wrong with. So please check out nordicsemi.com.

(06:01):

If you would like to win one of the Power Profiling Kit IIs that our sponsor is so generously handing out, please send us an email. That is, show@embedded.fm and tell us what your favorite PPK2 feature or spec is. Again, thank you to Nordic for sponsoring this week's show. <music>

(06:32):

Okay. So we asked you to talk about Daisy, after one of my students used it in a project for my class. They were building a guitar pedal. But there were a couple other students who referenced it. So I got curious. Then I saw your tagline for your company is "Let us teach computers how to sing." Tell me all about it.

AI (06:58):

Yeah. What Electrosmith is all about is merging the world of technology and art. I think sometimes in the tech space, or definitely the maker world, we get a little too fixated on the process or the specific technology that we are using, and sometimes lose sight of the end result.

(07:18):

For us at Electrosmith personally, we are really not just programming computers. We are doing what we think is one of the most awesome human behaviors known to man, which is making music or singing, metaphorically speaking. So it sums up really our mission statement, which is obviously "Let us teach computers how to sing." I almost said that.

CW (07:41):

<laugh>

EW (07:41):

<laugh>

AI (07:41):

But it is really just merging these disparate worlds of technology and art.

EW (07:47):

One of the interesting things about the Arduino platform was that it made technology accessible to folks who were not computer scientists. Who were not engineers. Do you have the same philosophy? Or do you expect some level of technological understanding from your audience?

AI (08:08):

No, I would say we have a very similar philosophy. That stems a lot from my own personal experience, which is I grew up as a musician. I did not do particularly well in math or science classes. It was not because I was not studious. And it was not because I was not used to dedicating large amounts of time to learning something. It is just that I really did not get excited about it, and I did not see an application for it that made sense for me.

(08:33):

But meanwhile, I am sitting in my room playing guitar for eight hours a day and things like that, which I think a lot of musicians are used to. I think that there is unfortunately this disconnect between musicians and certain subjects like math and engineering, because they do not see an application.

(08:51):

One of our big, big driving forces is let us fix that. Let us show these musicians why it is important to learn these things. And not only that, let us enable them to create new instruments themselves. Because that is where when you really look at a lot of the innovation that has happened, it is always people who think outside the box.

(09:10):

Leo Fender was not necessarily just thinking about a cool product, he was passionate about music. That led him to build and create and design what he did.

EW (09:25):

How do you reconcile needing to understand signal processing, which is Fourier, and how things work in modulation, with not having a science background?

AI (09:41):

That is a great question. Largely what we like to say is that you do not need anything to get started. You do not need to know any signal processing to get started. We have taken care of the nitty gritty, so to speak, and that is the beauty of the platform. Right? It is plug and play. Get started.

(09:53):

You do not really need to understand these advanced mathematical concepts, to just use our programming language, our code base that we have already made for you to use. A big part of that is we have an open-source DSP library, which is extremely accessible and anybody can use it. That way, they do not have to necessarily code up their own oscillator, they can just call our function, which does really all the heavy lifting for them.
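
To make that concrete, here is a minimal sketch of what a library oscillator does under the hood. The class and method names loosely mirror DaisySP's style (Init, SetFreq, Process), but this is an illustrative stand-in, not the DaisySP API itself:

```cpp
#include <cmath>

// Minimal sine oscillator: conceptually what a DSP library's oscillator
// function does for you. Illustrative only, not DaisySP code.
class SineOsc {
  public:
    void Init(float sample_rate) {
        sample_rate_ = sample_rate;
        phase_ = 0.0f;
        freq_ = 440.0f;
    }
    void SetFreq(float freq) { freq_ = freq; }
    // Returns the next audio sample in [-1, 1] and advances the phase.
    float Process() {
        const float kTwoPi = 6.2831853f;
        float out = std::sin(kTwoPi * phase_);
        phase_ += freq_ / sample_rate_;       // phase in cycles
        if (phase_ >= 1.0f) phase_ -= 1.0f;   // wrap once per cycle
        return out;
    }
  private:
    float sample_rate_, phase_, freq_;
};
```

Calling Process() once per sample at the codec's sample rate yields a 440Hz sine. A real library version adds more waveforms and amplitude control, which is exactly the heavy lifting a user does not have to write.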

(10:20):

Now, on the flip side, I get this question a lot, which is, "Well, I am a really experienced engineer, so am I going to be limited?" And the answer is, "Of course not." I mean, the sky is the limit. We are not going to hold you back. We just made it easier to get started.

(10:33):

The short answer to that question is you do not really need any knowledge of DSP to start, but as you learn and as you grow, it is going to come naturally. You do not have to worry about creating a baseline. You can just focus on the fun stuff, which is making sound and tweaking the algorithms.

EW (10:51):

Do you need a warning on your product, "This may cause you to become interested in the Fourier space and mathematical concepts"?

CW (10:56):

<laugh>

AI (10:58):

<laugh> We should. We should put something like that on there.

CW (11:02):

As is our brand, we have gotten a bit ahead of ourselves.

EW (11:03):

What?

CW (11:03):

Yeah. So what is this product? <laugh>

AI (11:06):

<laugh>

EW (11:10):

<laugh> Oh. Oh yeah. There is this Daisy thing. I heard is a board? Could you tell us about it, the specs and whatnot?

AI (11:18):

Most definitely. Daisy is an embedded platform for music, or any audio device really. It is not music specific. That just tends to be our bread and butter. What the platform consists of is development boards, which are similar to a Teensy, Raspberry Pi, things like that.

(11:35):

It has an ARM processor on it, and an audio codec, which is one of the things that sets it apart, as well as a few other peripherals that make doing complex audio easy. One of those things being we have an external SDRAM chip on there, which gives you 64MB of RAM. So you can do loopers. Things with large buffers are inherently easy on the Daisy platform.
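
For a sense of why the 64MB of SDRAM matters: at 48kHz, one second of mono floating-point audio takes about 192KB, so 64MB holds several minutes. A looper is then essentially a large circular buffer. This toy version (names and sizes are illustrative, not Daisy code) shows the idea:

```cpp
#include <cstddef>
#include <vector>

// Toy looper: records into a circular buffer and plays back what was
// stored one loop ago. length_in_samples must be > 0. At 48kHz, a
// 64MB RAM could hold a buffer of roughly 16 million float samples.
class Looper {
  public:
    explicit Looper(size_t length_in_samples)
        : buffer_(length_in_samples, 0.0f), pos_(0) {}
    // Feed one input sample; returns the sample from one loop ago.
    float Process(float in, bool recording) {
        float out = buffer_[pos_];
        if (recording) buffer_[pos_] = in;
        pos_ = (pos_ + 1) % buffer_.size();
        return out;
    }
  private:
    std::vector<float> buffer_;
    size_t pos_;
};
```

The large external RAM is what makes long loop buffers like this practical on a small dev board.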

(11:55):

And then the second part to what the platform is, is our software ecosystem. What we like to think of ourselves as doing is just making it easy, so that you do not really have to spend a lot of time designing hardware. You also do not have to spend a lot of time installing tools, or creating a development environment. You can get up and running very quickly using our off the shelf solutions.

EW (12:17):

I saw that it can be programmed with the Arduino interface, and there was some sort of web thing that looked like it used Mbed, like a "just drop a file in a folder" sort of on-board programming.

AI (12:30):

Yeah, exactly. We support a wide variety of programming environments, all the way from Max/MSP or Pure Data on one side, down to low level C++ on the other. And of course somewhere in the middle there is the Arduino IDE, which we have full support for.

(12:44):

And then the webpage you mentioned, the "Daisy Programmer" is what we call it. You can drop in a binary file and then flash any Daisy hardware. Now where this becomes really, really powerful is that people can share firmware really easily. This is not just users necessarily.

(13:00):

A lot of companies take advantage of this, because they can just put Daisies in their products. And then all of their users have an easy and effective way to flash firmware, that does not involve getting a custom programmer or any weird cables or anything like that.

EW (13:13):

Yeah, you just plug in USB on your computer, it shows up as a drive, you plop in your file and it programs itself.

AI (13:20):

Exactly. Yeah.

CW (13:21):

As an embedded developer, supporting multiple ways of programming something sounds like a nightmare.

AI (13:25):

It is horrible. It is absolutely horrible. <laugh>

EW (13:29):

<laugh>

AI (13:30):

I will tell you, and this goes back to one of your lightning round questions, which is really good, is we have learned a lot of lessons about supporting too many things. That is just the honest truth, is sometimes if you support too many things, you decrease the user experience across the board.

(13:43):

So it is something we are working on moving forward, is narrowing down specifically what most users are using, and focusing more development time on that specific language. Rather than trying to get even coverage across the board.

EW (13:57):

That makes a lot of sense, because there is this layer of impenetrableness when you have too many options. It is like, "Just tell me what to do and I will do it, and then I can make my choices."

AI (14:08):

Right. Exactly. That is one of those things where it is a better product if we tell you, "This is where you are going to have the best experience," and that is what we give you. Not "Pick your poison. Pick one of these six different languages, and your mileage may vary as far as how well it works out for you." Yeah, we are definitely having these conversations daily as we move forward with support for the platform.

EW (14:34):

If I want to program it with the JTAG style programmer, SWD probably, can I do that? Or do I need to go through the other way?

AI (14:43):

You definitely can, and it is something we highly recommend. We get a lot of first time programmers coming on board using it, and they are using printf to debug. Quote unquote debug.

CW (14:54):

<laugh>

AI (14:55):

We always like to tell them, "Hey, there is a better way to do this. You just have to buy this little JTAG programmer debugger hardware, which we get directly from ST." It is called the "STLINK-V3 MINI" or something like that. It is on our website. That is what we recommend using.

(15:09):

And then we have built-in support to the VSCode IDE, for getting it running when you are writing in debugging code.

EW (15:17):

Okay. What is in the name "Daisy"? Is it- I thought you would have called it like "Moog Light," except that was probably trademarked or something.

AI (15:29):

<laugh> That is a great question. I am surprised when people ask me this, people that know so much about the history of electronic music and this and that. They are still like, "Why 'Daisy'?" It is kind of weird.

(15:38):

So "Daisy" comes from this song called "Daisy Bell," and in parentheses, "Bicycle Built for Two." Which is this old song from the late 1800s, I believe. It was actually the first song that was ever played by a computer. It was programmed at Bell Labs by Max Mathews sometime in the fifties, I believe. It is this cute little song, "Daisy Bell," you can look it up online.

(16:02):

But we really wanted to highlight the heritage of what we are trying to do here of making computers sing, and where it all started and hopefully take it to a whole new world. Obviously Max Mathews was using a computer that was the size of, I do not know, five houses or something like that to do this. And now we can do it on this tiny little stick of gum size dev board.

(16:22):

And a lot of people are probably familiar with this from the film "2001: A Space Odyssey," where HAL- They are about to shut him down, and he asks if he could sing a song. Then he actually sings the "Daisy Bell" song. It is this really touching, beautiful moment in the film, when the computer is singing the song, and then slowly kind of dying as he is singing it.

EW (16:45):

Yes, that is beautiful.

CW (16:48):

What! <laugh>

EW (16:48):

No. I mean it, yes. I wondered why I knew I could hear the song in my head as you mentioned it. It is because of that scene. Not because I actually knew anything about the song.

AI (16:58):

Right.

EW (17:00):

Okay. One of the things I noticed with projects I have seen that use the Daisy, is that there are a lot of wires. That seems to be-

CW (17:14):

A lot of people build things on breadboards. Yeah.

EW (17:15):

They build things on breadboards, but then you usually have a moving component. Because you have sensors in multiple moving components.

CW (17:25):

Potentiometers or-

EW (17:26):

Or buttons-

CW (17:26):

Keyboards.

EW (17:26):

Or keys or sensors of some sort. Then they go to the Daisy, and then something, and then audio comes out somewhere. Where does the audio come out? Let us go with that first.

AI (17:39):

Okay, so the audio is coming out of the onboard audio codec. This particular part, the one we are currently using, can go up to 96kHz on the input, 192kHz on the output. That is the sample rate. And up to 24-bit resolution.

(17:55):

That particular chip, that IC, is taking the digital ones and zeros and converting that to analog audio. And that is what you are going to actually listen to, is you are going to connect the voltage that is coming out of the dev board coming from the codec, to some sort of connector, which will take it elsewhere to your system.
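
As a rough illustration of what "24-bit resolution" means on the digital side: each sample is one of 2^24 levels, so converting a float in [-1, 1] to a signed 24-bit value and back loses only a tiny rounding error. This is a hypothetical sketch of the quantization math, not the codec driver:

```cpp
#include <cmath>
#include <cstdint>

// Quantize a float sample in [-1, 1] to a signed 24-bit integer.
// 2^23 - 1 = 8388607 is the largest positive 24-bit sample value.
int32_t FloatTo24Bit(float x) {
    const int32_t kMax = (1 << 23) - 1;
    float scaled = x * kMax;
    if (scaled > kMax) scaled = kMax;    // clip to the legal range
    if (scaled < -kMax) scaled = -kMax;
    return static_cast<int32_t>(std::lround(scaled));
}

// Convert a 24-bit sample back to a float in [-1, 1].
float From24Bit(int32_t q) {
    return static_cast<float>(q) / ((1 << 23) - 1);
}
```

The round-trip error is on the order of one part in eight million, which is why 24-bit audio is considered transparent for most listening purposes.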

EW (18:11):

Like a powered speaker or headphones?

AI (18:15):

Exactly. I actually have a funny story about the audio codec on the Daisy. I like to call this the "codec saga." <laugh> I tell everyone who will listen, some people get bored faster than others, but I think your audience will hopefully enjoy it.

(18:35):

Flashback a few years to 2020, and this is when we launched the Daisy on Kickstarter. It was right before what we now know of as "the chip shortage." Of course we did not know that when we were launching the Daisy, or we probably would have waited, I do not know.

CW (18:49):

<laugh>

AI (18:51):

Done something, maybe something different. But the original audio codec that we were using is the AK4556, which is made by AKM. They are a very high fidelity Japanese audio codec company. They specialize in audio chips. And we were very happy with it. Everybody was happy with it. We designed it on the board.

(19:07):

We got our first shipment of, I do not know, 5,000 or so. We had scheduled POs over two years, because we started to get really scared with the chip shortage and everything. Now, closer to the end of the year, I happened to see some headline online. It was probably some clickbait thing like, "Japanese semiconductor factory burns to the ground!"

CW (19:27):

Great.

AI (19:27):

And I am like, "Huh." And I click on it of course, and it says, "AKM." And now I am like, "Oh no, this is not good." "AKM factory burns to the ground, blah, blah, blah." And I am like, "Okay, that is terrible. But what are the chances that it is the exact same factory our chip gets made in? Slim to none, of course. Right?"

(19:44):

So the next week, my sales rep emails me, "Hey, the factory burned down. So it is going to be six months before you get your next purchase order." Now this is a disaster for us. We are really young. We are just getting the platform going. We did not want to go out of stock, so we locked up all the open market inventory we could. Cleared out the catalog suppliers. If you could not get the AK4556, it is probably my fault. I am sorry. We needed them.

(20:06):

We did what we could, waited the six months or so that they said it was going to take to get the factory back online. And then we get another email and they said, "Hey. AKM actually looked at it and they decided to make this chip obsolete. So you are not going to get any of your purchase orders. Sorry."

(20:21):

They recommend using this other thing, which was not pin to pin compatible. It was some QFN package, and ours was some sort of TSOP package. So this was a huge nightmare for us. We were freaking out, we were never going to get any more. At that point, the open market stock had skyrocketed to ten times the price. It was just this crazy situation.

(20:39):

So what do we have to do? We got to design it out, of course. At that point we designed in a Wolfson codec, I think Cirrus now, they bought Wolfson. But it is the WM8731, which if you have ever made or looked at a Eurorack module, it is this ubiquitous codec in the Eurorack space. But it was also used on one of the early iPods. So it is a fun fact about that codec.

(21:01):

We designed that one in. Specs were a little bit worse, but mostly comparable. It fit most applications. We designed that one in. Everybody is happy. We schedule out our POs and we are sitting fine for a few months. The next thing we know Wolfson emails us, or Cirrus emails us, "Hey, this is going obsolete. Time for your lifetime buy."

CW (21:21):

<laugh>

AI (21:23):

We are flabbergasted. I just cannot believe the luck at this point. So of course, yeah, lifetime buy, we lock up, I do not know, 20,000 or so chips, right? Lifetime buy, we are going to be good. Around the same time, the Daisy sales volume took off. So our lifetime stock wound up being like six months of stock.

(21:39):

All of a sudden we are redesigning another audio codec onto our board. That is where we actually wound up looking at this TI codec, which is the one that is currently installed, which is the PCM3060. It actually has better specs than both the AKM and the Wolfson codec, which is great. But we had to do another redesign.

(21:57):

This just happened. This must have been, I do not know, six or seven months ago, we finally got that into production. And that is the current codec that is on the board. For everybody who has had to make minor changes to their code, I am sorry. This is the reason. Hopefully this will clear it up a little bit better than our forum posts did.

CW (22:14):

How different are they between- Are they not just- Naive. Are they not just nice I2S on one end and-

EW (22:20):

Voltage on the other?

CW (22:21):

Voltage on the other? <laugh>

AI (22:22):

Sort of.

CW (22:23):

I am sure there are config-registers and stuff that is-

AI (22:25):

There is a lot more stuff to it. For that reason, we have tried to simplify the hardware connections, and not take advantage of the extra things we can control digitally, and just make it simple.

(22:34):

Backwards compatibility is really important for us, of course. But each codec is just a little different specs-wise, which could also affect your noise that might pop up on different layouts. And then the output impedance is an issue. So those are things we have learned to deal with, because of having to switch so many times. But that is it. That is the codec saga for you.

EW (22:56):

And the chip shortage.

AI (22:57):

And the chip shortage.

EW (22:58):

But you managed to-

CW (23:00):

You managed to get a processor. <laugh>

EW (23:00):

Yeah.

AI (23:02):

Yes. That was at the time that the chip shortage really came into effect, and we saw our lead times pushing out. There were a lot of companies depending on the Daisy to ship their own products. So we looked at things and just bit the bullet, and locked up tons and tons of stock with the risk of being, "Hey, maybe we will not make it through all of this," of course.

(23:24):

But there were so many companies that- It was not just us depending on it. It was a lot of other companies that were depending upon that revenue. For me, in order to build trust in the platform, I thought it was really necessary to just double down and lock up the stock.

EW (23:40):

It is an Arm Cortex-M7?

AI (23:43):

Correct. Yeah, it is the H750 series.

EW (23:46):

Ah, is it-

AI (23:49):

Just so you know how bad it was. These were on allocation. So they would quote you like something around 52 plus weeks lead time, but that did not mean you were going to get them. Because at the end of that 52 weeks, if you did not understand how the whole semiconductor supply chain works, your sales rep is going to say, "Hey, yeah, you were not in the bucket, because you were not on the allocation. I am sorry. That is just the way the cookie crumbles."

(24:09):

Because what happens of course is they have to take care of their best customers, the semiconductor companies. That is why chips go in allocation. And that is the way it ends up happening, is the lead times go out the window.

EW (24:20):

They do not mean anything. They mean, "That is a minimum. And if you were not in line by now, you cannot have it until then." And being in line is not obvious.

AI (24:32):

Exactly. What most people tell me is, "Look, if they are quoting you over a year, it probably means they do not really know." <laugh>

EW (24:39):

Probably means, "Never."

AI (24:39):

Yeah, over a year is, "Go away."

(24:40):

It does not matter.

CW (24:42):

"Stop asking."

AI (24:42):

Who knows?

EW (24:46):

But that was an ST part, if I look at your schematics. Yeah, an STM32 part. Those were chip shortage catnip, very hard to find.

AI (24:57):

A hundred percent. I can say we definitely developed a much closer relationship with ST Micro around that time. Not coincidental. <laugh> But I can honestly say we did not go lying down once. So we were very connected and taken care of with ST Micro. I cannot say enough good things about them throughout that process.

EW (25:16):

Good for you. So this started as a Kickstarter program. Then you fulfilled your orders. Then it continued on as a small business. And then it took off? Is that the right trajectory? Or- What can you tell me about how the business is going?

AI (25:35):

Yeah. I still am currently running Qu-Bit, the modular synth company. It is a big part of what we do. And the first customer of Electrosmith's to design the Daisy in, was of course Qu-Bit. So every Qu-Bit product that we have released for the last three years, three or four years, has used the Daisy at its core.

(25:54):

Then a lot of other companies in similar audio spaces, be it Eurorack modular, effects pedals, desktop synths. They also started designing it into their products. The Kickstarter was very successful. It was above and beyond what we expected.

(26:08):

But the next huge bump of revenue actually came from companies, more so than the hobbyists. At least in the post Kickstarter time period, one and a half years. Once people started seeing it in all these products that they were using, that is when we saw a really big uptick for the first time post Kickstarter of the hobbyist and education space.

EW (26:31):

That is funny. Kickstarter worked then.

AI (26:33):

It did. It definitely did.

EW (26:37):

I started to ask you about the wires that I often see with the hobbyist use of the Daisy. Do you see that a lot? Do you have ideas for how to fix that? Is it just part of creating something?

AI (26:52):

I think part of it is the nature of audio, somewhat. It just tends to require lots and lots of cables. Anybody who plays guitar, or especially uses synthesizers, can attest to that. Not that that is not just an engineering problem at large. It is of course.

(27:08):

But the audio space, there are so many different protocols and different types of connectors and jacks and sensors that everybody wants to use, that it is just a little bit the nature of developing with this type of platform.

(27:21):

Now with that being said, a big thing that we have tried to do to mitigate that, has been making these breakout boards or example hardware designs, which will eliminate the need or at least minimize the need for doing a lot of breadboarding.

(27:34):

We will take common applications, say if you look at the Daisy Pod, this is just a line level breakout board for the Daisy. It has a headphone output, it has got a couple pots, a couple switches, couple jacks. It is line level audio, and then you just power it from your computer.

(27:50):

That is probably a Daisy 101 project. You are going to just break out the breadboard, wire up all these things. But why do that if you can just buy the Pod and eliminate all the wires that you are talking about? That is actually an extremely popular product, because many people are saying, "You know what? Screw the breadboard. Let us just start with that and I will not have to use all these wires."

EW (28:12):

That makes sense. And then there is a submodule and an init. What do these do?

AI (28:21):

Yeah, the Patch submodule is taking that same concept just a little bit further, because the Patch submodule is basically an entire Eurorack module. It includes power conditioning, CV input/output circuits, audio scaling circuits. Anything that you would have to do to make a Eurorack module goes away. It is all just right there on the board.

(28:39):

So you could actually, if you wanted to, you could just connect all of your jacks and your pots to your front panel, and then just hard solder wires directly to the Patch submodule board. And it is a functional Eurorack module. You do not need to know any parts. You do not have to do any hardware design or any circuit design whatsoever. So it just takes that concept and takes it to the next level.
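
One concrete example of the circuitry the Patch submodule absorbs: Eurorack pitch CV conventionally follows 1V per octave, where each additional volt doubles the frequency. Once the hardware has scaled the CV into a readable voltage, the math is a one-liner (the base note here is chosen arbitrarily for illustration):

```cpp
#include <cmath>

// Eurorack pitch CV conventionally follows 1V/octave: each extra volt
// doubles the frequency. base_hz is an arbitrary reference (C1 here);
// a real module calibrates this against its CV input scaling circuit.
float CvToFreq(float cv_volts, float base_hz = 32.70f) {
    return base_hz * std::pow(2.0f, cv_volts);
}
```

So 0V gives the base note, 1V the octave above it, and so on; the submodule's CV input circuits are what deliver a clean, scaled voltage for this kind of conversion.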

EW (29:00):

Oh, I see. The Patch submodule does not need a Daisy plugged into it. It is a Daisy with different I/O configurations, and more stuff.

AI (29:11):

Exactly. It is just the Daisy Seed schematic, plus all of the circuits you are going to wind up doing anyways.

EW (29:17):

Okay.

CW (29:18):

So you have two additional things that I want to ask you about, because they go to my theory that ESD does not exist.

EW (29:25):

<laugh>

AI (29:25):

<laugh> I do not know about that.

CW (29:26):

You have the Field and the Patch, which are these really cool, big, enclosed-

EW (29:32):

With actual knobs and buttons.

CW (29:34):

Basically- Yeah, you turn them into a little desktop synthesizer module or a Eurorack module, with the front panel and jacks and knobs and things.

EW (29:43):

And they are not cheap.

CW (29:45):

And the Daisy just plugs into the front of them in a socket. So I worry about this Daisy in a bar on the floor, while playing-

EW (29:55):

<laugh> Next to Christopher's drums as the sticks are flying.

CW (29:59):

Or a- Well, yeah.

EW (30:01):

Or on the rug where he plays.

CW (30:02):

Exactly. <laugh>

AI (30:04):

Yeah, that is a valid concern. Our design goal for this was really to show what is possible with the Daisy, and make it easy to develop. We were not necessarily trying to compete with Strymon's reverb device.

CW (30:18):

Right.

AI (30:18):

If that makes sense.

CW (30:19):

Yeah.

AI (30:19):

Now they will probably fare better than you would expect maybe, in that sort of environment with all the lights on them and everything. They will probably sound better than you might guess. But at the end of the day, it was more of a development platform than an actual product as such.

CW (30:32):

Yeah. They are very cool though. <laugh> There is something cool about just having the electronics totally visible and outside.

EW (30:41):

Well, the Field has a case and the Daisy is on the outside, but other than that, it is enclosed. Everything else is relatively open and free to hang out. Do you have a- What did enclosures do to you as a child?

CW (30:54):

<laugh>

AI (30:59):

<laugh> That is a good question. Manufacturing enclosures is a headache and a nightmare. So if you do not have to, why bother? We are happy to help our customers design their own enclosures, which is more often than not what we are doing.

(31:14):

We do include a lot of mechanical files to ease that process for our customers. Like 3D models of the Daisy, and recommended cutouts for the screen, for example, things like that. But yeah, I am not a huge fan of enclosure manufacturing.

EW (31:31):

That is all right. "We will just include enough files. They can 3D print it themselves."

AI (31:35):

Exactly.

EW (31:37):

<laugh> Is that what most people end up doing?

AI (31:39):

It depends. It really depends on the audience. We sell to a lot of companies. Now they are not going to be 3D printing their enclosures. But then we sell to a lot of hobbyists, so they will make- Even for the Pod, people are making these really cool 3D printed enclosures. So we see everything. We see all different types.

EW (31:56):

I have a couple of listener questions I want to get through. First, Bailey asks if polyphony is supported.

AI (32:06):

It is definitely supported. You would probably be amazed at how many voices of polyphony you could get on the Daisy. It is a really common application. Anyone familiar with the CHOMPI sampler that just got released?

CW (32:21):

Yes, I saw that.

AI (32:22):

Yeah. So that has a Daisy inside-

CW (32:23):

Oh. Okay.

AI (32:24):

And that has- I do not know the exact number of voices of polyphony, but it has a lot. And it is doing a lot of effects, and a lot of other stuff, and it runs just fine.
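Conceptually, polyphony on a platform like the Daisy is just several voices rendered and summed in the audio callback. The sketch below is not Daisy or CHOMPI code, just a minimal host-side illustration of that idea; the `Voice` struct and its naive sine oscillator are assumptions made up for this example:

```cpp
#include <cmath>
#include <vector>

// Minimal polyphony sketch (illustrative only, not Daisy firmware):
// each voice is a simple sine oscillator tracked by phase.
struct Voice {
    float phase = 0.0f;   // phase in [0, 1)
    float freq_hz;
    explicit Voice(float f) : freq_hz(f) {}
    float Process(float sample_rate) {
        float out = std::sin(2.0f * 3.14159265f * phase);
        phase += freq_hz / sample_rate;
        if (phase >= 1.0f) phase -= 1.0f;
        return out;
    }
};

// Mix all active voices into one sample, scaling by 1/N to avoid clipping.
float MixVoices(std::vector<Voice>& voices, float sample_rate) {
    float sum = 0.0f;
    for (auto& v : voices) sum += v.Process(sample_rate);
    return voices.empty() ? 0.0f : sum / static_cast<float>(voices.size());
}
```

Real firmware would add envelopes, voice stealing, and per-voice effects, but the mixing itself is this simple, which is why voice count on a modern microcontroller mostly comes down to the DSP cost of each voice.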

CW (32:32):

It is kind of amazing. Audio is not a difficult task for modern microcontrollers. I have some synths, like this one here, the Blofeld. I do not think that has got anything very exciting in it, but it is a sampler with- It has tons of voices. I think Daisy probably has way more processing power than-

EW (32:50):

But we saw the Wiggler, which was a Daisy-

CW (32:52):

But that was a choice to just make it a monophonic. Yeah.

EW (32:54):

Yeah. Okay.

CW (32:55):

That was their choice.

AI (32:57):

But that has been a big trend too. You mentioned modern microcontrollers are so good at audio, and they are. What we have really seen happen in the last five to ten years, is we do not have to use purpose-built DSP chips anymore. It used to be you would get your SHARC DSP on there, and then you would still have some other microcontroller handling all of your controls and your ADCs and all that stuff.

(33:15):

But the cool thing about the Daisy, and one of the reasons we chose the particular ST microprocessor we did, is that modern general purpose microcontrollers are so fast. You can just use those and they come with everything else you need anyways. Our chip actually has 16-bit ADCs on board, which is extremely unusual for- Even general purpose microcontrollers do not always have that.

EW (33:37):

Well, it does not have any DSP blocks, but it has got a lot of specialized DSP code.

AI (33:43):

Right.

EW (33:44):

And that is under the code that you provide.

CW (33:47):

Oh, like Arm's-

EW (33:49):

CMSIS.

CW (33:50):

Neon stuff or whatever.

EW (33:51):

Yep. Yeah.

CW (33:51):

Yeah.

EW (33:53):

Another question. From Bartholomew, "Are gold-plated contacts worth it?"

CW (33:58):

<laugh>

AI (34:02):

<laugh>

EW (34:02):

This should have been a lightning round question.

CW (34:03):

No, but oxygen free cables definitely are.

EW (34:05):

<laugh>

AI (34:08):

<laugh> Yeah. I do not know how far down this road I want to go, because it is a polarizing, dangerous topic.

EW (34:14):

<laugh>

CW (34:14):

So to speak.

AI (34:15):

So let me split the middle, okay. We only use ENIG boards, so that is a gold-plated surface finish on our PCBs. They are substantially better in every single way than all the other options. HASL, what have you.

(34:30):

Now when it comes to hi-fi audio setups, I think there have been double blind studies where 99% of people cannot hear the difference. Take that for what it is worth.

CW (34:45):

Between a garden sprinkler controller cable and a $10,000 gold cable.

AI (34:50):

Exactly. Yeah, exactly.

CW (34:50):

<laugh>

EW (34:50):

<laugh>

AI (34:52):

That is my two cents on it. So sometimes it matters, but most of the time probably not.

EW (34:56):

Well, you are already doing things at 96k, and if anybody can hear above 40kHz, just let us know.

CW (35:05):

That all gets filtered. There are reasons to do computations at that speed, but not- Anyway. I had a thought, what was it? Oh, well.

EW (35:13):

Back to Bartholomew. "What are your recommendations for strain relief for corded things?"

CW (35:19):

Oh, yeah.

AI (35:20):

Gosh, I do not know that I am the person to ask that question to.

EW (35:24):

Okay.

AI (35:24):

Yeah. I would say if you are performing on stage, definitely pull your cable through your strap and then plug it in.

CW (35:32):

Yes! <laugh>

AI (35:32):

<laugh> But I think that is about the most expertise I can pull on that area.

EW (35:37):

You mentioned playing guitar in high school, and then you went to music college. But you did not major in guitar.

AI (35:49):

It is complicated, because at the time I went to Berklee, you had to have what was called a "principal instrument," and this could not be a computer. They have since changed that, which I think is incredible. But at the time I went there, you had a major, but that was still separate from your principal instrument.

(36:03):

So I was both a guitar principal, as they say, and then I was also majoring in electronic production and design. So I did both, and I spent a lot of time playing guitar in college.

EW (36:13):

Cool.

CW (36:15):

That leads me to the question, you went to Berklee School of Music. You obviously did some computer stuff there, for music production and stuff. But how did you go from that, to working on designing boards and firmware, and working with microcontrollers and stuff?

EW (36:36):

Signal processing?

AI (36:37):

Yeah, so at the time when I was at Berklee, I knew I wanted to do electronic music, and I knew I loved synthesizers. I did not have any yet, because I was still playing guitar.

(36:49):

But for me, the thing about guitar, it morphed into this pedal experience. And it was not really about playing at a certain point. It was really about just tweaking sound and creating new timbres with the pedals. And not necessarily utilizing the guitar primarily. So my head was kind of already in this space.

(37:06):

Then when I got to Berklee, I took a class on Csound, which is a programming language specifically for music. It really got me thinking that the ultimate way to make electronic music, is to actually make your own instrument, and then make the music with the instrument.

(37:22):

In hindsight, this is kind of crazy, and I do not really recommend this to a lot of people, because I think you can-

CW (37:28):

Only suckers buy their own instruments.

EW (37:29):

<laugh>

AI (37:30):

You know what I mean. Yeah, exactly. That is just so wild. That is so inaccurate and not true. And I think it kind of shames like, "Oh, do not use presets." And that is just- Come on. Aphex Twin uses presets, and he makes them sound amazing. Let us not get upset about presets.

CW (37:42):

Life is too short not to use presets. <laugh>

AI (37:43):

Exactly. But for me, I think this led to- Even though my thinking was a little flawed, I think it led to a very interesting career. Because I got excited about learning how to write code and how to make electronic circuits, specifically to make my own instruments, which I could use for composition.

(37:59):

So it started with Csound, it went to Max/MSP, and then I started getting turned on to the embedded stuff. I started learning, of course, Arduino, Makey Makey, very entry point stuff.

(38:09):

But then one day I was in a class with Dr. Boulanger at Berklee. Everybody is very familiar with his work. He held up this small rectangular board, and he said, "This is the Raspberry Pi." This must have been, gosh, 2011, 2012. It was basically right when it first came out, so nobody had heard about it.

(38:30):

He said, "The reason that is important is because it runs Linux. And of course on Linux you can run Csound, among a number of other open-source music programming languages."

(38:38):

So that really got the light bulb going off in my head, which was, "Okay, this is $35 and it can run Csound, and it is already embedded. I can write Csound code, and I know these modular synthesizers cost like $300 a piece. There might be something there."

(38:56):

That is really what kickstarted my whole career, was being crazy enough to think that, "Sure, I will just figure that out, and learn some engineering enough to release a product." And I did. And people wanted it.

(39:06):

It was really a fortuitous time in the industry, because it was really early on for Eurorack synthesizers. There were almost no digital designs at all. So when we came out with this Raspberry-

CW (39:16):

Oh really. Huh. Okay.

AI (39:16):

Oh yeah. Back then, "analog" and "modular" were basically synonyms for people. It was the same thing for them. And all these people hated digital. <laugh> Honestly, they really did not like it at all.

(39:31):

So when we came out with our product, sure there was some pushback. But most people were just craving these more advanced synthesis techniques, that they had never had access to in the hardware domain.

EW (39:43):

A question from Tom Anderson. Comment first, "I have had good results with the Daisy, calling the API from C++. Do you have any advice for guitar builders who are looking at building it into their guitars?"

CW (39:57):

Mm-hmm.

EW (40:00):

"The power draw is about 1W," question mark. I do not know- "What is the best way to get rid of the low-level 11.5kHz tone at the output?"

CW (40:09):

<laugh>

AI (40:09):

<laugh>

EW (40:09):

I feel like, Tom, maybe you need to go to the forum for that part.

AI (40:13):

<laugh>

EW (40:14):

But let us go with the first part. Advice for guitar builders.

AI (40:16):

Yeah, no, I will hit both, because these are both really great questions. So the first thing to note, effects pedals are extremely sensitive to noise. It is crazy. Our experience was a little more on the synthesizer side of things. So when we started doing effects stuff, it was like a wake-up call, like, "Whoa, this is a whole different ball game."

(40:34):

First of all, let us just talk about you are using these single coil pickups, which are a crazily archaic technology and almost inherently have hum. Then you are magnifying it massively to get up to effects pedal level. Then you are sending it through all these things. And then you are running it through an amp, which is again, another crazy piece of technology. So it is just more sensitive, in a nutshell.

(40:55):

So you have to be extremely careful, specifically when it comes to doing a layout. PCB layout becomes eminently important, in a way that it is not in a lot of other applications. I know that is not great advice for people on the breadboard, but it is the truth at least. So let us start with that.

(41:11):

Then let us go to the second point. So how do I get rid of- He is saying 11 something kilohertz tone. The more common tone that people are going to be talking about is a 1kHz tone. The dreaded 1k tone.

EW (41:22):

Ooh. That is a drone. That is not good.

AI (41:26):

<laugh> So the thing about the 1k tone is it is being caused by your audio callback. So every time your audio callback function finishes-

CW (41:35):

Oh my God! <laugh>

AI (41:36):

It is going to pull extra hard on your ground. There is going to be this big current spike. It is going to create a transient on your power supply. This is going to work its way through to your output circuits, and you are going to get what more commonly is a 1kHz tone, if you are using a standard block size. Right?

EW (41:52):

Capacitors.

AI (41:53):

Right. Exactly.

CW (41:54):

Well...

AI (41:54):

The technically correct answer is, "Okay, well let us look at your ground scheme, and let us get all these caps, and let us do some pi filters, and let us do all this." Okay, yes, that is true.

(42:03):

But the quickest, and someone would say maybe dirty, way to fix this, is if you are not maxing out the Daisy's capability, which most people are not, take your block size- Now, this is the number of samples in your callback. If you are using our code, it is right there in libDaisy: just change the block size. Bring it down to a really, really small number. Set it to "2".

(42:23):

Now, what that is going to do, is it is actually going to take that tone, and it is not going to make it disappear, but it is going to make it so high frequency that humans cannot hear it. So for 90% of your applications- If you really need those cycles, we can have that conversation. It is more involved. But if you are really not pushing every last cycle on the Daisy, just change your block size and it is going to go away.
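As a back-of-the-envelope check on why this works: the transient fires once per audio callback, so its fundamental frequency is the sample rate divided by the block size. The 48 kHz rate and the block sizes below are illustrative assumptions, not values read from any particular libDaisy project:

```cpp
// One current spike per audio callback produces a tone at:
//     tone_hz = sample_rate / block_size
constexpr double callback_tone_hz(double sample_rate, double block_size) {
    return sample_rate / block_size;
}

// Assuming a 48 kHz sample rate and a 48-sample block, the transient lands
// at exactly 1 kHz: the dreaded 1k tone.
static_assert(callback_tone_hz(48000.0, 48.0) == 1000.0, "1k tone");

// Shrinking the block to 2 samples pushes it to 24 kHz, above human hearing.
static_assert(callback_tone_hz(48000.0, 2.0) == 24000.0, "inaudible");
```

In libDaisy this is typically a one-line change along the lines of `hw.SetAudioBlockSize(2);` before starting the audio callback, though the exact call is worth checking against the current API documentation.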

EW (42:45):

Because you are pushing it above human hearing.

AI (42:48):

Right. We can only hear 20Hz to 20kHz. And let us be real-

CW (42:50):

Twelve.

AI (42:50):

Most of us can only hear up to like 15k at the best. Yeah.

CW (42:54):

Been in a rock band.

EW (42:55):

He played drums.

AI (42:55):

It is depressing going to those hearing tests, I have done it. It is really depressing.

CW (43:01):

If you add a lot of filtering to get rid of a tone in your signal chain, you are also affecting your signal. And 1kHz is kind of an important frequency range.

EW (43:10):

Oh, that is an important frequency range. But above about 15, you are pretty good smashing those.

AI (43:16):

Exactly. We have talked to a lot of engineer engineers that do not have experience with audio, and they are like, "Well, yeah, just get a brick wall filter right there at 1k." And we are like, "Yeah. Well, I do not think you understand." <laugh>

CW (43:24):

<laugh>

EW (43:25):

That would sound interesting.

AI (43:27):

It is kind of important, that little frequency band.

CW (43:29):

Maybe Tom's problem is he has set the block size to "4". <laugh> Or "8", I do not know.

AI (43:32):

Yeah. So I start with block size, and then move on to layout and things like that.

(43:37):

I would mention here too, and I know a lot of people on the forum and Discord already know about this, so it is only sort of secret. We have a new form factor of the Daisy coming out. It is called the "Seed 2 DFM." "DFM" stands for "design for manufacture." This particular form factor, it has already been installed in a bunch of products for two years, so it is very field-tested. We are just wrapping up the documentation.

(43:59):

But the reason I bring it up now, is because it has differential outputs instead of single ended outputs. In the effects pedal environment, this pretty much just eliminates all of your issues. So we have got to the point where, if you are running into the tone and you cannot really adjust your block size, and you do not really want to mess with the layout too much, just use this other form factor. Because it is going to basically solve the problem for you.

EW (44:27):

When I asked you in lightning round about musician, engineer or CEO, you took a pause. When you go in on Monday, what are you most excited about working on? Is it the hardware? Is it the structure? Is it planning, software, music, signal processing tools?

CW (44:48):

<laugh>

EW (44:48):

What is it that you like to do with your company? And with Daisy?

AI (44:57):

I would answer this question differently depending on the month you asked me, I think. Part of that is just that the needs of the businesses change over time. And depending upon which business I am focusing on, it changes too.

(45:09):

Because of course, when I am wearing my Qu-Bit hat, we are usually just designing instruments. I tend to get really excited about the DSP and the musical applications, and what the users are going to be able to do.

(45:18):

Then when I come over to Daisy, it has to shift a little bit. Let me answer it with my current obsession. This might sound kind of lame, but my real obsession and what gets me the most excited at work these days, is the actual manufacturing process of the Daisy boards.

(45:36):

Not a lot of people know this, but we actually do the assembly ourselves. We have an SMT line in-house, and we do everything. Everything except fab the boards or make the chips. So this has been something that I have just been endlessly tweaking.

(45:51):

But lately what happened was our Daisy volume got so high, that we actually had to install a new SMT line, just to handle Daisy boards. It gave me a chance to structure an entire manufacturing paradigm from the ground up, really tailored and tooled up specifically for only two or three boards, which makes it extremely powerful.

(46:10):

If you have any manufacturing experience, the fewer SKUs you are dealing with, the easier it is to achieve great results. So currently, this week, that is what I have just been obsessed with. Getting the reflow oven profile just right for the Daisy. Or evaluating all the different solder paste options. And things like that.

EW (46:29):

Okay. "Manufacturing engineering" was not on the list.

CW (46:34):

<laugh>

AI (46:34):

<laugh>

CW (46:34):

Why did you bring assembly in-house?

EW (46:37):

And actually doing this in San Clemente in southern California?

AI (46:40):

That is right. Yeah. We are doing it here.

EW (46:41):

Wow!

AI (46:41):

There is a lot to say. Let me give you the short answer. When it comes to manufacturing my own products, I am a little bit too much of a control freak. Maybe? <laugh> But it really is about the fact that to me, Daisy is not just a schematic.

(46:57):

I get a lot of companies coming to me, and they want to license the schematic. Because they just want to take advantage of the circuit design, and they want to take advantage of the code base.

EW (47:07):

Right. If we are going to build it in, we will just build it in. Not buy your board, and then have to deal with connectors. Yeah. Okay.

AI (47:15):

But to me, Daisy is so much more than that. Knowing how it is built and being able to say, "I personally stood there and tweaked that reflow profile. I know what your field defects are going to be like. They are going to be very, very low, if not non-existent, because I tried so hard to improve it."

(47:31):

Not only that I tried so hard originally, it is that we take every Daisy defect we find, we work at improving it. You cannot really improve it, if you do not have control over the manufacturing stage.

(47:41):

So it creates this closed loop system, where all the defects get fed to the engineers, which then gets fed to the manufacturing team, which then improves the defects, but then- So on and so forth. You know what I mean? So it is really this process of constant improvement, which enables the platform to get better every single year.

(47:59):

If we outsource manufacturing to some nameless overseas company, we are not going to get that. I know we are not going to get that. And it might not even be consistent from production run to production run.

EW (48:10):

And that is important.

AI (48:11):

Of course. 100%. Especially with all the companies that are relying on us to put the Daisy inside their boards, we really do not have any margin for these things failing. The reputation of the platform depends on how well it was manufactured. So at the end of the day, do we really want to give that out, trust that with somebody else? Probably not.

EW (48:32):

So we have been talking about the hardware itself, but the other big component to the Daisy platform is the software. How much of that do you do, versus how much do you have engineers who do it, versus how much is community managed?

AI (48:46):

I was very lucky early on to have engineers come on board, that are way better than I could ever hope to be. So I do very little to no programming these days. I have fantastic engineers that do a great job on that.

(49:02):

As far as the codebase overall, I would say a good 90% of it is still internal firmware that we are writing, software tools and whatnot. But we do have- If you check out our GitHub repos, we do have a lot of community engagement and a lot of external contributors. Which are really helping take the projects to the next level, and keep us on task with that as well.

EW (49:31):

What kind of projects do you see Daisy being used in, other than Qu-Bit and the things you work on yourself?

AI (49:39):

Of course there are the modular synths, that is the bread and butter. The effects pedals. The desktop synths. There have been a few really interesting areas that have gotten me really excited lately.

(49:47):

One of them is in the educational space. We have seen it being used in courses for all sorts of things. Whether it is creative coding, intro to programming, intro to hardware design.

(49:59):

It is being used at RISD, which is actually just a design university. It is not an engineering college as such. They are using it to teach how to design modular synth modules. So it has really branched out to all these different areas. That is one area that has been particularly exciting for me, just because of my passion for education, and especially in the non-engineering space.

(50:20):

One other project that popped up on the forum lately, which I hope continues. And if anybody out there is an amateur radio operator, a ham radio nut, please make things with the Daisy. It is such a great platform for that.

CW (50:33):

Huh! I was about to ask you, have there been non-musical signal processing applications?

AI (50:39):

Not a ton. But I did see a software-defined radio, an SDR project, pop up last week. I am a ham. I am KN6TAU. If you see me on the airwaves, that is me. So that gets me really excited, because it is not music specific. Now it is just audio. It is so powerful, but I do not think it has gotten the notoriety in that space that it really deserves.

EW (50:58):

There are a lot of things that need signal processing.

CW (51:01):

There is a ton. Yeah. Well, in audio range. Right?

EW (51:04):

Yes.

CW (51:04):

Which is still a lot.

EW (51:07):

Yes.

CW (51:10):

No, that is a cool platform.

EW (51:13):

Sorry, I have gone off into design land myself, so that is not good radio for sure.

AI (51:18):

<laugh>

EW (51:18):

So Christopher-

CW (51:23):

Uh oh.

EW (51:23):

Drummer. He plays bass guitar.

CW (51:28):

Yeah, that is fine.

EW (51:28):

Plays some guitar. He plays piano. He does embedded software, but he hates it.

CW (51:36):

<laugh>

EW (51:36):

So take that with a grain of salt. He is the co-host of this show, and he does all of the "making us sound good" part of it. What should I get him from your website?

CW (51:50):

<laugh>

AI (51:51):

You should get him the Field, without a doubt. Despite his fear of the exposed PCB, I think it would still suit his purposes best.

EW (52:05):

It has a lot of buttons, and he does really like buttons.

CW (52:08):

Oh, and they light up. I like light up buttons.

EW (52:09):

Yes.

AI (52:10):

Got light up buttons. Got lots of those. Got lots of LEDs.

CW (52:12):

<laugh>

EW (52:13):

INs and OUTs, and cables-

AI (52:14):

INs and OUTs.

EW (52:14):

And- Yeah.

AI (52:16):

That is the big thing about the Field too, is it can process audio, but it can also generate audio. So if you are both a piano player and a guitar player or a bass player, you could run your bass through it or you could connect your MIDI keyboard up to it.

EW (52:31):

He has a few of those. Sorry.

CW (52:34):

<laugh> Stop looking at my desk! <laugh>

AI (52:34):

<laugh>

EW (52:37):

<laugh> We are in the studio, which is surrounded by instruments and whatnot. Do you get to play music very often?

AI (52:45):

I always hate this question, because I think I disappoint people so much when I answer it. But I will be honest, I will be honest with you. I work a lot of hours, and when I get home after a ten or twelve hour day, messing with synthesizers is not always top of my list. So I do not really patch nearly as much as I used to when I get home.

(53:06):

But I have actually gotten into playing the ukulele, and it has been really interesting. It has been this therapeutic way to reconnect with music, in a way that does not use electrons. It has been this, I guess cathartic, it is like this real therapeutic thing I could do on the weekend.

(53:24):

So I do play music. I am making a lot less electronic music than I used to, which I am working on. But I have been playing a little ukulele.

CW (53:32):

I am a huge advocate of low friction musical instruments, that you could just pick up and that are fun to play. Do not require a lot of technique necessarily. <laugh>

AI (53:42):

Sure.

CW (53:42):

So when you are tired, you can actually still do something. Yeah.

EW (53:45):

Well, there is some push and pull. If you spend your day- Like, I do some writing and I enjoy writing, but if during my job I am doing a lot of writing- I am finishing a technical book now, so I am doing no writing for fun. But if my job was all programming and hardware and I did not have any writing outlet, I would probably write for fun. And I have in the past.

(54:08):

So there is some- If your job is all about making instruments all the time, going home to think about instruments- You might still get to think about music, but thinking about instruments, probably that part of your brain may get tired. Some other part of your brain needs a turn.

AI (54:28):

Exactly. And avoiding burnout is so important. I have been really excited to see more talk about this since the pandemic, but it is extremely important. And I think when you make passion your career, it is even more dangerous for this very reason.

EW (54:44):

Yes. Well, I usually say, "If you make your hobby your career, then what are you going to do in your free time?"

AI (54:50):

Exactly. <laugh>

CW (54:52):

Or what are you going to do when you burn out, and now you do not like your hobby anymore, or your career?

AI (54:58):

You do not have either. Exactly. <laugh>

EW (55:01):

Do you have any advice for staying away from burnout? Or after you have gotten into it, getting out of it?

AI (55:08):

That is a good question. I would say one thing is it is best to think of it as a marathon, not a sprint. For me, whenever I do find myself in the throes of burnout, it is good to reconnect with what got you excited about it in the first place. Sometimes I will try to find an instrument that I did not design, or I did not have any connection to, that is exciting. And play with it.

(55:32):

I have mentioned this before a couple times, but Ciat-Lonbarde is this modular synth manufacturer, who makes things that are so wild and different than what I normally make. There have been a few times when I get super burned out, I will just plug one of those in and play with it.

(55:44):

It is just so different from what I am used to, that it reconnects me. Because I get excited about electronic sound, but I do not have to play with my own stuff. And that helps.

EW (55:54):

Yeah. It does help. Because if you are playing with your own stuff, you see bugs in it.

AI (55:58):

Oh, I see the bugs. I battle them all day at work, and then I have to play with them at home! <laugh>

EW (56:01):

Or even if things are going well and there are no bugs, you still have to be thinking about how a customer might approach this. If you are the customer, there is some critical part that can turn off.

CW (56:14):

There is definitely a novelty factor to things.

EW (56:15):

Oh, absolutely.

CW (56:16):

Especially with music, I find. That is why Gear Acquisition Syndrome exists, right?

EW (56:21):

<laugh>

CW (56:21):

It is like, "I am bored with all this stuff, but if I get something else, then maybe I will be inspired." And like you said with the modular thing you were playing with. But if you are so close to something and you are building it, it would be hard to get any separation from the concepts even to- Yeah, it would be easy to get lost in that, it seems like.

EW (56:42):

Well, Andrew, it has been wonderful to talk to you. Do you have any thoughts you would like to leave us with?

AI (56:50):

Yeah, I would like to share one of my favorite quotes, which is from Arthur C. Clarke. It says, "Any sufficiently advanced technology is indistinguishable from magic." Now for me being a lifelong musician, growing up, music was magic, pure magic. It is just endlessly thrilling to me.

(57:09):

Now coming into technology a little bit later in life, this combination just blew my mind and continues to do so. Being able to come to work every day and combine the magic of music with the magic of technology is really just an absolute dream.

EW (57:27):

Our guest has been Andrew Ikenberry, electronic instrument designer and music tech entrepreneur. Andrew is the founder of Electrosmith, Qu-Bit, and 2hp.

CW (57:38):

Thanks, Andrew.

AI (57:39):

Thank you.

EW (57:41):

Thank you to Christopher for producing and co-hosting. Thank you to our Patreon listener Slack group for their questions. And of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

(57:55):

I cannot really think of a quote to leave you with. So I guess I should probably tell you that Electrosmith has offered us 5% off, if you use the coupon code "Embedded FM." This is good for a month after the show goes up. So if you are hearing this before about mid-March and you want to check it out, 5% off, Embedded FM.