356: Deceive and Manipulate You
Transcript from 356: Deceive and Manipulate You with Leonardo Laguna Ruiz, Elecia White, and Christopher White.
EW (00:00:06):
Welcome to Embedded. I am Elecia White, here with Christopher White. I think this week we should spend a bit of time talking about filtering, signal processing, music, and programming languages. Our guest is Leonardo Laguna Ruiz.
CW (00:00:23):
Hello Leonardo. Thanks for joining us.
LLR (00:00:25):
Hello, all Embedded listeners.
EW (00:00:27):
Could you tell us about yourself?
LLR (00:00:31):
Yes. I'm Leonardo, I'm a PhD in electrical engineering and I kind of live a double life. During the day, I work as a software engineer in Wolfram Research, but during the night, I work on my own projects. Which is, I have a small company called Vult where I do mainly things related to sound synthesis.
EW (00:00:59):
What does Vult mean?
LLR (00:01:00):
Vult? I got the idea, like, from some crazy dream where I have visions of vultures. And when I started developing these things and I needed to set the name, I said, "Wow, Vult sounds good", so for vulture.
CW (00:01:21):
Huh.
EW (00:01:21):
Okay.
LLR (00:01:23):
And it has been there for a while.
EW (00:01:28):
Well, I think we're going to do lightning round now and you're a listener, so you know how this goes.
LLR (00:01:34):
Yeah. And I'm nervous.
CW (00:01:35):
Alright. Ready?
LLR (00:01:37):
Ready.
CW (00:01:38):
Favorite chord.
LLR (00:01:41):
F sharp.
EW (00:01:42):
CV or MIDI?
LLR (00:01:43):
CV, definitely.
EW (00:01:47):
What is CV?
LLR (00:01:48):
Control voltage, yeah.
CW (00:01:51):
Favorite instrument of all time.
LLR (00:01:54):
Hmm. I think it will be definitely the guitar. Especially the Les Paul model. I really like that model.
EW (00:02:04):
Favorite fictional robot.
LLR (00:02:07):
This is a difficult question. I will say that my favorite is Ava from the movie Ex Machina, because it is like the only robot that shows their real intentions, which is to deceive and manipulate humankind.
CW (00:02:31):
Software or hardware?
LLR (00:02:34):
I cannot pick one. I think I would say both.
EW (00:02:37):
Complete one project or started a dozen?
LLR (00:02:41):
I used to be the kind of person that started a dozen, but in recent years I have moved to starting only half of those and finishing one.
CW (00:02:51):
What is the worst eighties synth song?
LLR (00:02:57):
I don't know. Since I mean, I was born in the eighties -
CW (00:03:03):
Ah, ok.
LLR (00:03:03):
And I'm pretty sure that there are lots of songs that are really bad and I never heard, so I only know the good ones.
CW (00:03:10):
Very good, very good.
EW (00:03:15):
Okay. Let's get on to the longer questions 'cause some of these require whiteboard. And so we're going to have to talk around them.
LLR (00:03:21):
[Affirmative].
EW (00:03:21):
You said that the vultures told you to make filters. No wait, that wasn't quite what you said, but something like that.
LLR (00:03:33):
Yeah. Something like that.
EW (00:03:34):
What do you actually do in your spare time or in your non-work time?
LLR (00:03:40):
I like to do a lot of sound synthesis projects, and the filter part came a little bit late. So before, I was just trying to program my small microcontrollers to do wave generation, et cetera, but then one thing that I have seen while doing these projects is that the filters are almost never done digitally.
LLR (00:04:10):
Because, from what I've read, they always say, do your sound generator digital, but do your filters analog. And I was wondering why? Why are the filters always done analog? And then I fell into this rabbit hole of filters...and I started making a lot of analog filters and modeling filters. And that's how I ended up in recent years making mainly filters, digital filters.
EW (00:04:44):
Christopher's brother, Matthew, plays guitar, and has a large number of amps and says things like "These tubes are the best, and I can only get them from Russia and they're marked radioactive or sensitive."
CW (00:05:02):
You're just making stuff up.
EW (00:05:02):
I am totally making stuff up. But he is very focused on the tube sound.
CW (00:05:08):
Yes, he likes analog stuff. Yeah.
EW (00:05:10):
Why do, I mean, does it really sound different? Is this like having gold Ethernet cables?
CW (00:05:15):
Not quite but let him answer.
LLR (00:05:20):
I mean, probably, yeah, part of it is related to the gold Ethernet cables. There's one thing that analog components have. They are imperfect and they tend to sound more pleasant than their digital counterparts. So if I make a guitar amp that uses the lowest distortion, the best operational amplifier or integrated circuit, and I plug in my guitar, it's going to sound really nice, I mean. But it will not have that charm.
LLR (00:06:05):
So it's a bit like eating boiled potatoes and eating French fries. The boiled potatoes are, I mean, they are good...They're practically the same thing, but for some reason the French fries taste much better. And this, well, in...the specific case of the tube amplifiers, the fact that these amplifiers can introduce small imperfections to the sound makes the sound specifically for the guitarist more interesting.
EW (00:06:46):
So amplifiers amplify things based on the name and filters, filter things, which seem kind of like the opposite things to do. Why do we talk about them together?
LLR (00:07:04):
So I mean, filters, they remove frequency so we can think about them as amplifiers for frequency, right?
EW (00:07:16):
Yeah, that makes sense.
LLR (00:07:16):
And in the case of synthesizers, filters are used in one specific kind of synthesis, which is called subtractive synthesis, in which you start with waveforms that are rich in harmonics, that have a high content of harmonics, but then you can put filters on them to remove some of those harmonics or to shape them, just to get the kind of sound that you want. For example, I'm thinking of a very basic synth sound. Let's say that I want to make the sound of a piano. What I need is to model first the impact of the sound when I press the key and the string is hit -
EW (00:08:13):
That's like the attack, right?
LLR (00:08:15):
Yes. The attack, and for that, I need more harmonics to have like this initial explosive sound.
EW (00:08:23):
Is this actually a high frequency sound? I mean, it's got all of the frequencies.
LLR (00:08:27):
Yes. And as the string resonates, the higher frequency harmonics are being consumed and then you get only the fundamental oscillation of your string. And that is very close to a sine wave. So if I wanted to imitate that, it would be possible to use...an oscillator that produces more harmonics, then use an envelope to open the filter during a small period of time, and then close the filter gradually in order to get that sound...the release sound of the piano.
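For readers who want to see the idea in code, here is a minimal C++ sketch of the subtractive-synthesis recipe described above: a harmonic-rich sawtooth feeding a one-pole low-pass whose cutoff is swept by a decaying envelope. The structure and constants are illustrative only, not taken from Vult or any of Leonardo's modules.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const float kPi = 3.14159265f;
    const float sampleRate = 48000.0f;
    const float noteHz = 220.0f;        // oscillator pitch (A3), illustrative
    float phase = 0.0f;                 // sawtooth phase, 0..1
    float env = 1.0f;                   // decaying envelope, starts fully "open"
    float lp = 0.0f;                    // one-pole low-pass state

    for (int n = 0; n < 48000; ++n) {   // one second of audio
        // Sawtooth: rich in harmonics, the raw material for subtractive synthesis.
        float saw = 2.0f * phase - 1.0f;
        phase += noteHz / sampleRate;
        if (phase >= 1.0f) phase -= 1.0f;

        // The envelope decays exponentially and drives the cutoff:
        // bright at the attack, darker as the note rings out.
        env *= 0.9999f;
        float cutoffHz = 100.0f + 8000.0f * env;
        float g = 1.0f - std::exp(-2.0f * kPi * cutoffHz / sampleRate);

        // One-pole low-pass gradually removes the upper harmonics.
        lp += g * (saw - lp);

        float out = lp * env;                        // amplitude decays too
        if (n % 4800 == 0) std::printf("%f\n", out); // print a few samples
    }
    return 0;
}
```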
CW (00:09:20):
So one of the things about synthesizers is it's not just, you have a filter and you set it to, "Okay, here's this notch I'm going to remove, or here's this set of low frequencies I'm going to remove." You actually manipulate the filter in real time as the sound evolves.
LLR (00:09:34):
Exactly.
EW (00:09:35):
To like change the envelope -
CW (00:09:37):
Yeah.
EW (00:09:37):
- over time, even, so that if you press the same note five times, it sounds different?
CW (00:09:43):
Even within the same note hit, it might open up.
EW (00:09:48):
Okay.
CW (00:09:48):
Or close.
LLR (00:09:50):
Yeah, that's exactly it. One big difference from the kind of filters that we study, for example, in electrical engineering: when you design a filter, or when you read a text on designing a filter, you pick a frequency, you select your components and you implement your filter, but...the musical filters need to be controllable, and they also need to be controllable in an exponential way.
LLR (00:10:24):
That way it matches the way the notes change. For example, a C note in a piano, you have many C notes in a piano. And the difference between those is that each note...it doubles the frequency, right? Each octave -
EW (00:10:52):
Right.
LLR (00:10:52):
- doubles the frequency, and in the case of the musical filters, you want to control them in the same way. So you don't say, "Right now...I'm at 100 Hertz. And then I'm at 110." The relation between the controlling voltage and the frequency is exponential, so it sounds nice...in a musical scale.
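In code, the exponential control Leonardo describes is just a power of two: each unit of control voltage doubles the frequency. A small sketch in C++ (the base frequency and the one-unit-per-octave scaling are illustrative assumptions, not from the episode):

```cpp
#include <cmath>
#include <cstdio>

// Volt-per-octave style mapping: each unit of CV doubles the frequency.
float cvToFrequency(float cv, float baseHz) {
    return baseHz * std::pow(2.0f, cv);
}

int main() {
    const float baseHz = 261.63f;   // C4 as an arbitrary reference
    for (int octave = 0; octave <= 3; ++octave)
        std::printf("CV %d -> %.2f Hz\n", octave, cvToFrequency(float(octave), baseHz));
    return 0;
}
```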
CW (00:11:20):
And you can go pretty far with that too, towards the point where you're only playing the filter. Like, on one of my synths, I can set the filter to get into a kind of self-resonance.
LLR (00:11:31):
Yes.
CW (00:11:31):
And then control it with the keyboard. And so it's the only thing that's actually, it's getting some noise from inside whatever's left of the synthesizer and that's resonating in the filter, but the filter's tuned with the keyboard. So you're actually basically just playing the filter.
EW (00:11:48):
So in an electronic and electrical engineering term, this is like, terrible.
CW (00:11:54):
Yes.
LLR (00:11:54):
Yes.
EW (00:11:54):
I mean, this is the opposite of what you want, but because you're doing music, and music is often not intuitively mathematically graspable, you go ahead and you use these effects. And that's also...why people like tubes, is because they're imperfect.
LLR (00:12:17):
Yes. I was thinking, for example, the other day I needed to read a temperature. It comes in as an analog voltage, and I wanted to use a simple amplifier to convert it, just to get a wider range, I mean, a larger signal value. And the thing is that this sensor - I don't remember exactly, but let's say that it sends me 10 millivolts per degree Celsius.
EW (00:12:51):
[Affirmative].
LLR (00:12:51):
I wanted to convert it to two volts. So I need to amplify it a lot, and so I make an analog amplifier for it. The problem is that this analog amplifier can only output a voltage that is...with a margin, less than the positive power supply and higher than the negative power supply. Right? It cannot exceed the two rails. If I feed the amplifier with 12 volts, I cannot get 40 volts out of it.
EW (00:13:26):
Not, not for very long. No.
LLR (00:13:30):
Yeah. Probably not for very long. The thing is that that filter will be, I mean, that amplifier will be very bad, because I have a very short range. But if I plug my guitar into it and I use exactly that amplifier, I'm going to sound like Jimi Hendrix. So from the engineering point of view, it's a bad design. But a musician definitely can do something interesting with it.
EW (00:13:59):
Why? Why are our brains...is it that our brains are wired to like the imperfections? Or is it that we are culturally wired to prefer these things?
CW (00:14:12):
I don't, yeah, I don't know the answer.
LLR (00:14:14):
Yeah. There is one thing that the distortion does, and it's easy to measure. When there is a little bit of distortion, we're introducing high frequency harmonics. For example, if we have a pure sine wave and we see the spectrum, you will only see one bar, right?
EW (00:14:36):
[Affirmative]
LLR (00:14:36):
One point. But when you add a little bit of distortion to this waveform, you will see in your frequency analyzer that small higher frequencies appear. And the more you distort it, the more of these you get, and it just makes it sound more appealing.
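One common way to get this effect digitally (not necessarily how any particular amp or module does it) is a soft-clipping function such as tanh: the harder you drive it, the more harmonics appear. A hedged C++ sketch with made-up drive values:

```cpp
#include <cmath>
#include <cstdio>

// Soft clipper: roughly linear for small signals, saturating for large ones,
// which is what adds the extra higher harmonics to a pure tone.
float softClip(float x, float drive) {
    return std::tanh(drive * x) / std::tanh(drive);
}

int main() {
    const float kPi = 3.14159265f;
    for (int n = 0; n < 16; ++n) {
        float clean = std::sin(2.0f * kPi * n / 16.0f);  // pure sine: one spectral line
        float dirty = softClip(clean, 4.0f);             // distorted: extra harmonics
        std::printf("%6.3f -> %6.3f\n", clean, dirty);
    }
    return 0;
}
```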
EW (00:15:05):
So if you're going to make a model of an analog synth or an analog circuit as part of a synthesizer, and you're actually going to do this in digital...do you just play one of the originals and capture a spectrum and envelope, and then recreate it? Is this how you go about modeling it? Or is there a different way?
LLR (00:15:31):
No, a little bit different. Okay, I'm going to tell you a little bit of the process. So what I usually do is I take one of the existing filters. There are lots of schematics online that you can find. Let's say that I take one of those, and then I start analyzing what's the basic architecture of the filter. So let's take, say, a Sallen-Key filter, which is also quite common.
EW (00:16:10):
Sallen-Key. Okay. It's a high-pass, low-pass filter?
LLR (00:16:16):
Yes. And many of these musical filters have a variation of this one. But in order to change the cut-off frequency in this architecture, in the Sallen-Key, you need to change two resistors. And in many of the synthesizers, since you don't want to put variable resistors there, you want to control them with voltage, there are different ways of doing that. For example, with diodes, by changing the biasing of the diodes, or you can put transistors, FETs, et cetera, in order to make, to simulate, well not to simulate, to have this variable resistor. So what I start doing is, with this simulator that I write at Wolfram, I start making a component model of the circuit. So, I mean, you can think about it as just making a Spice simulation, where you put all the components, right?
CW (00:17:32):
Ah, okay.
EW (00:17:32):
Okay.
LLR (00:17:33):
The main problem with the Spice simulator is that the models maybe are too complex. If I put in, like, an operational amplifier model, it will contain a minimum of 12 transistors, and I'd be simulating those 12 transistors. But with the simulator that I use, I can replace that complex model with a simpler one. For example, in the case of the operational amplifier, the simplest model will be just measuring the voltage between the inputs and adding gain.
LLR (00:18:11):
And then I can consider other effects. And this is where it starts getting interesting. So let's say that I make my model, I use resistors, capacitors and an operational amplifier. If I make this model of the operational amplifier linear, I will get a linear filter. And the linear filters...sound okay. But if I want to simulate the analog feeling of this, what I need to do is to make a model of the operational amplifier that behaves just like my real amplifier. Which will be - it has clipping.
CW (00:18:56):
Mm.
LLR (00:18:56):
It is exactly what I mentioned before. The amplifier cannot output more voltage than its rails. Then I have to add this limiter. And once I put in this non-linear operational amplifier, my filter is going to be non-linear. And from this circuit, just by following the basic electrical analysis methodology, like nodal analysis, getting the equations at the nodes, I can get a set of equations that I can start simplifying more and more until I get the smallest representation, which will be a set of nonlinear differential equations. And these are the equations that I need to simulate in order to imitate this specific filter.
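To make the "op-amp clipped at its rails" idea concrete, here is a deliberately simplified C++ sketch (not the Sallen-Key analysis and not anyone's actual model): a high-gain op-amp whose output is clamped to its supply rails, driving an RC section integrated with forward Euler. Remove the clamp and the same loop is an ordinary linear filter.

```cpp
#include <algorithm>
#include <cstdio>

// Very simplified op-amp: large open-loop gain, but the output is clamped to
// the supply rails -- the non-linearity that makes the whole filter non-linear.
float opAmp(float vPlus, float vMinus, float rail = 12.0f, float gain = 1.0e5f) {
    return std::clamp(gain * (vPlus - vMinus), -rail, rail);
}

int main() {
    // The op-amp compares the input with the capacitor voltage, and its (possibly
    // clipped) output charges an RC section, stepped here with forward Euler.
    const float dt = 1.0f / 48000.0f;
    const float rc = 1.0f / (2.0f * 3.14159265f * 1000.0f);  // ~1 kHz time constant
    float vc = 0.0f;                                          // capacitor voltage

    for (int n = 0; n < 10; ++n) {
        float vin = 20.0f;              // deliberately beyond the +/-12 V rails
        float vout = opAmp(vin, vc);    // saturates at the rail
        vc += dt * (vout - vc) / rc;    // capacitor charges toward the clipped output
        std::printf("n=%d  vout=%6.2f  vc=%6.3f\n", n, vout, vc);
    }
    return 0;
}
```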
CW (00:20:05):
So it's really a component level model that you start with, not a behavioral.
LLR (00:20:12):
Well, it depends. It's a mix.
CW (00:20:14):
[Affirmative].
LLR (00:20:14):
It's a mix...I usually start with the component level, because the component level is the one that allows me to analyze which things are the ones that I have to model. So let's say that my filter has four operational amplifiers. Maybe not all four need to be non-linear, and in order to determine that, I usually just do a comparison with the real circuit, which I either made in a VCV or have it in RedBoard. Then I input a signal and I check, "Okay. So if I put my filter into this configuration, what's happening? Okay. This operational amplifier is saturating. This one always works in the linear region, then I can replace it with a simpler model." In the end, that's part of the analysis, trying to figure out what are the things that matter and what doesn't matter in order to get a simpler model.
EW (00:21:24):
Are there any popular filters that have just not fallen to this technique, that are just too complicated?
LLR (00:21:37):
I'm not sure. I think that, at least, I have been able to simulate most of them reasonably well, but...the problem that I have had is that since I want to model a filter and then simulate it in a microcontroller, I don't have enough CPU to do that. And then I have to drop details of my models in order to be able to simulate it.
LLR (00:22:07):
So...my model will not sound exactly the same, but it will sound pretty close, and it will be much more efficient. To give you a specific example, I was modeling a filter found in a Russian synthesizer called the Polivoks. And they use these special operational amplifiers in which you send a voltage that controls the bandwidth of the amplifier.
LLR (00:22:40):
And in order to model that, first I had to get the operational amplifiers, then make the model. And the model that I got was too complicated. I mean, the differential equations were very hard to solve. And then I had to simplify my model a bit in order to get something that sounds close, but it doesn't sound exactly like the original.
EW (00:23:11):
Have you trained yourself to hear the differences? And does that affect your enjoyment of music?
LLR (00:23:17):
Yeah...the worst thing about working with something like this is that you cannot listen to music while working, 'cause I need to be listening to continuous waves, sounds, for long periods, and my ears just get tired. So I also use the frequency spectrum a lot, just to find out, I mean, to help me be sure of what I'm hearing. Yeah. That's what that's about. And also my wife gets very upset, because I'm playing a sound for, like, two hours, waahhh, waahhh, waahhh...
CW (00:24:06):
We're sponsored this week by Qt or "cute". I'm super excited to have Qt as a sponsor. I've been a fan for years. I built entire medical devices using Qt as well as tons of small utilities for things like firmware update and microcontroller config. Qt is a cross-platform application framework based on C++. That means you get a full set of libraries for nearly everything you can think of, plus a world-class GUI that will give you a native look, wherever your code runs. You can target 16 different desktop, mobile, embedded operating systems. Write your code once and run it nearly everywhere with minimal modifications in one SDK, in one language you probably already know. It's awesome to take something that started on Windows, move it to Linux, macOS, even iOS in a matter of days, and have everything scale and appear native for the new target.
CW (00:24:51):
Qt's fast design and development workflow is trusted by over 1 million users worldwide, in over 70 industries, including automotive, medical, automation, and aerospace. Version 6 has just been released and has a bunch of cool new features, including C++17 and Python support. It has a new graphics architecture that supports the latest GPU APIs. Developing adaptive, scalable user interfaces in Qt has gotten even easier. They're also announcing a brand new product that I'm super excited about: Qt for MCUs. I wish I had this two or three years ago. It is a tiny footprint version of the full-fledged framework. You've got the same rapid development tools and GUI designer, now for a smartphone-quality user interface on your embedded project. Since the tools are the same, you can prototype and test your UI without hardware, right on your desktop.
CW (00:25:37):
Qt for MCUs targets RTOS-based systems or bare metal on a variety of MCU families. I'm thrilled about this, and can't wait to try it. If you'd like to try Qt or Qt for MCU yourself, go to qt.io/embeddedfm and sign up for a free trial. It includes all the frameworks and development tools along with a bunch of cool demos of desktop, mobile, and MCU applications. Qt made my development life so much easier dozens of times. And I think it can do the same for you. Thanks again to Qt for sponsoring this week's show.
CW (00:26:14):
I wanted to ask, just briefly, before we move on: you said you get it down to a minimal set of nonlinear differential equations, and those are not easy. There are various techniques for solving nonlinear differential equations, but they aren't as approachable as linear differential equations. What methods do you use to solve them, and how do you fit that into a micro?
LLR (00:26:39):
Yes, so it depends on the filter I have. I mean, one of the advantages that I think I have is that in my real work, I develop a simulator, and over the years I have accumulated a lot of knowledge on how to simulate differential equations. In order to fit them in the microcontroller, I use a lot of things. For example...part of my analysis, once I have the differential equations, is to find a method that is suitable for the set, because if I want to simulate an RC filter, made with one resistor and one capacitor, probably the Euler method for solving differential equations will be enough. So I don't need to do more.
LLR (00:27:33):
And if not, for other filters, what I do is try all the methods that I know: the Heun method, Runge-Kutta, trapezoidal integration. And then I try to find which one fits better to solve the equations and gives more or less accurate results. And then there comes another problem, which is performing the computations. And since I have nonlinear elements and I have to do iterations, I have a bag of trickery to write the code. For example, using a lot of lookup tables in my code to try to speed up the computations. If I have a long formula that I need to evaluate and it's only one variable, it's worth making a lookup table for it. So, yeah, for all the equations, I try to do that. And also, I try to use mathematical simplification as much as possible in order to reduce the number of multiplications, which are usually the ones that tend to be more expensive.
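As a hedged illustration of the method trade-off he mentions (not his code, and with made-up constants), here is one step of forward Euler versus the trapezoidal rule for the simple RC filter equation dv/dt = (vin - v)/RC, written in C++. For this linear equation the implicit trapezoidal step can be solved in closed form, which is why it is so popular for filter models.

```cpp
#include <cstdio>

// One integration step of dv/dt = (vin - v) / rc, two different methods.
float eulerStep(float v, float vin, float rc, float dt) {
    return v + dt * (vin - v) / rc;            // forward Euler: cheap, less accurate
}

float trapezoidalStep(float v, float vin, float rc, float dt) {
    // v' = v + dt/2 * (f(v) + f(v')) solved for v' in closed form.
    float a = dt / (2.0f * rc);
    return ((1.0f - a) * v + 2.0f * a * vin) / (1.0f + a);
}

int main() {
    const float dt = 1.0f / 48000.0f;
    const float rc = 1.0f / (2.0f * 3.14159265f * 5000.0f);  // ~5 kHz cutoff
    float ve = 0.0f, vt = 0.0f;
    for (int n = 0; n < 8; ++n) {              // step response of both versions
        ve = eulerStep(ve, 1.0f, rc, dt);
        vt = trapezoidalStep(vt, 1.0f, rc, dt);
        std::printf("n=%d  euler=%.4f  trapezoidal=%.4f\n", n, ve, vt);
    }
    return 0;
}
```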
CW (00:28:54):
It sounds like it's mostly time domain, though. When I was doing this stuff years ago, one of the techniques was to do, I can't even remember, a Fourier transform of the nonlinear DE and then do, like, Runge-Kutta in frequency space and then move back, or something like that. But this sounds like all time domain.
LLR (00:29:14):
Yeah. I usually do the time domain because once I have the set, I mean, if I apply this technique and I make it efficient enough, it will work fine without doing a back and forth Fourier analysis.
CW (00:29:30):
That's cool.
EW (00:29:32):
Well, one of the problems with doing the Fourier analysis is you need a window to act upon. And as soon as you have a window to act upon, now, you're delaying your signal.
LLR (00:29:42):
Yes.
EW (00:29:43):
And that has all kinds of problems associated with music.
LLR (00:29:47):
Yes. So one of the big problems in the filters is that you usually have feedback paths -
CW (00:29:53):
Oh.
LLR (00:29:54):
- in the circuit. So let's think about one of the popular filters, the Moog ladder filter, which is basically four low-pass stages, and then you have feedback. If you model this one as four separate low-pass stages, and then you take the output sample and you feed it back to the input, you get a filter that sounds different, because you have a delay in the output. And in order to avoid that, I need to analyze the system, including the feedbacks. And that's usually why I need to solve the differential equations. Because I need to iterate, I need to have the iterative method in order to solve this feedback path.
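Here is a rough C++ sketch of that difference, purely for illustration and not a model of the actual Moog circuit: four cascaded one-pole stages with global feedback, once with the naive one-sample-delayed feedback, and once resolving the feedback within the same sample by a few fixed-point iterations. The coefficients are arbitrary.

```cpp
#include <cstdio>

// Four cascaded one-pole low-pass stages with global feedback,
// in the spirit of a ladder filter. Everything here is schematic.
struct Ladder {
    float s[4] = {0, 0, 0, 0};   // stage states
    float g = 0.2f;              // per-stage coefficient (sets cutoff)
    float k = 3.0f;              // feedback amount (resonance)

    // Naive version: feed back last sample's output (one-sample delay in the loop).
    float processNaive(float in) {
        float x = in - k * s[3];             // s[3] still holds the previous output
        for (int i = 0; i < 4; ++i) {
            s[i] += g * (x - s[i]);
            x = s[i];
        }
        return s[3];
    }

    // Iterative version: a few fixed-point iterations resolve the feedback
    // within the current sample, closer to solving the coupled equations.
    float processIterative(float in) {
        float y = s[3];                      // initial guess: previous output
        float t[4] = {0, 0, 0, 0};
        for (int iter = 0; iter < 4; ++iter) {
            float x = in - k * y;
            for (int i = 0; i < 4; ++i) {
                t[i] = s[i] + g * (x - s[i]);
                x = t[i];
            }
            y = t[3];
        }
        for (int i = 0; i < 4; ++i) s[i] = t[i];
        return y;
    }
};

int main() {
    Ladder a, b;
    for (int n = 0; n < 6; ++n) {            // compare step responses
        float na = a.processNaive(1.0f);
        float it = b.processIterative(1.0f);
        std::printf("naive=%.4f  iterative=%.4f\n", na, it);
    }
    return 0;
}
```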
EW (00:30:53):
You've implemented a pretty long list of filters, but I don't know what they sound like. Are there ones that would, that I would recognize based on certain songs, or are there ones that are special, beyond those we've talked about?
LLR (00:31:09):
I don't know if you, I mean, not you, but a person that is not into the synthesizers will recognize them.
CW (00:31:17):
I can't even tell. I mean, I've got several and I kind of know what these things do. I couldn't tell you which filter was which in a song.
EW (00:31:25):
Okay. Then tell me about this one. Debriatus, a Wave Destructor with Bit Crusher, Wave Fold and Distortion.
LLR (00:31:37):
Yes. Yeah. That's not the filter, definitely but that's -
EW (00:31:40):
What is it?
LLR (00:31:40):
So that's another thing. In this module for Eurorack that I make, I included a lot of filters. Most of my filter models are implemented in this module, and it's possible to switch between them just by the press of a button. And since I have a module that can do that, and I have a processor, I decided to put some of my other virtual modules into this one, into this real module. And that one is basically a distortion, because you can always take your audio signal, spice it up a bit with bit-crushing, saturation, distortion, and then put the filter after it, to get it back to nice levels.
CW (00:32:33):
A bit crusher is, that's where you take the bit depth of a digital signal and chop it off, right?
LLR (00:32:39):
Right.
CW (00:32:39):
So if you have a 16 bit digital signal, you truncate it to eight and it sounds gross. And then you fix it up with a filter.
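A bit crusher really is that simple. A hedged C++ sketch (the bit depth and test signal are arbitrary): quantize a signal in the range [-1, 1] down to a small number of levels.

```cpp
#include <cmath>
#include <cstdio>

// Reduce a [-1, 1] signal to fewer effective bits by rounding to coarse levels.
float bitCrush(float x, int bits) {
    float levels = std::pow(2.0f, float(bits - 1));   // steps per polarity
    return std::round(x * levels) / levels;
}

int main() {
    const float kPi = 3.14159265f;
    for (int n = 0; n < 12; ++n) {
        float x = std::sin(2.0f * kPi * n / 12.0f);
        std::printf("%7.4f -> %7.4f\n", x, bitCrush(x, 4));  // crush to ~4 bits
    }
    return 0;
}
```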
EW (00:32:51):
You mentioned Eurorack and modular synthesizers. Could you explain what that is for people who aren't surrounded by a lot of synthesizers?
CW (00:33:00):
I have no Eurorack in this house.
LLR (00:33:01):
Yeah. Okay. So -
EW (00:33:09):
Is that what you want for Christmas? Sorry, go ahead.
LLR (00:33:10):
Okay. So basically, the common synthesizers that people may have seen, they consist of a bunch of buttons in a keyboard form factor.
EW (00:33:23):
Yes. Yes. I see that.
LLR (00:33:24):
And those are...like fixed synthesizers, let's call them that. Integrated synthesizers, in which the architecture of your sound, I mean, of your synthesizer, is a little bit fixed. It follows a traditional path, in which you have sound sources, filters, and then you have modifiers, and you can do a lot with those synthesizers. But there is this other kind of synthesizer, which is the modular, and one of the most common form factors is this one, the Eurorack. And the main difference is that when you want to build a synthesizer, instead of buying one that has everything, you buy individual modules.
LLR (00:34:14):
So you can say, "I want an oscillator, and I like this one from this manufacturer because it does this and that." So you buy that one and you have an oscillator in your panel. The panel of your module has jacks, audio jacks, and then you can just take that audio signal and insert it into any other module that you want. So you can say, "I want a filter," and you buy one, two, or like me, you end up having thirty-two filters, analog filters. So you pick one of those...and then you can send your signals. So the main difference is that you have full freedom to construct your synthesis machine and to rewire it with physical cables.
EW (00:35:08):
Like if you were to play this live, would you be rewiring it live? Or is it, you wire it, and then that's how it sounds for however long the song takes.
LLR (00:35:17):
It depends on the artist. I have seen a lot of videos of people that actually play live with modular synthesizers, and they usually have it a little bit fixed. 'Cause I mean, when you have a synthesizer that is that big with so many cables, it may be hard to rewire live, but you can always do a little bit of patching. So probably changing a few of the jacks that...you remember. Because...I have a small Eurorack synthesizer and I have probably 60 patch cables. And when I'm designing some sound, sometimes I use them all and I'm not doing anything complicated, like a live performance. So I cannot imagine a live musician, patching 60 cables in a single session.
CW (00:36:21):
Most of the time they're probably twiddling the knobs and things that affect individual modules and not rerouting things.
LLR (00:36:28):
Yes. And there are even modules that help you do that. If you don't want to change cables live, you can add a module with switches. Then you can just toggle switches to change your outs.
CW (00:36:45):
Usually what happens to me, even when I'm using a non-modular synthesizer is, I'll start playing with it and then not make any music because I'll be so busy, changing, routing and getting so into that part of it, like, "Oh, I'm designing this sound. Oh, what if I connect this to that? Oh, what if I turn this knob?" And then three hours have gone by and I haven't actually, I mean, it's fun.
LLR (00:37:07):
Yes, I agree, that's what I usually do. I just get lost in the sound design so badly that I haven't made any songs in four years or something, three or four years? Just sounds.
EW (00:37:26):
But you have been making this unit that can go into the Eurorack that then can model, simulate, mimic, a whole bunch of other different modules that would normally go in the Eurorack.
LLR (00:37:42):
Yes. Yes. So that's, I mean, before we have been talking a lot about filters, and...I do all these models because I want to put them into this module, which has an ARM microcontroller. And the idea behind that was to have this digital module that, just like the robot that I mentioned before, tries to deceive you and manipulate you into thinking that it is an analog filter, but it is not, it's digital. But it has the advantage that you don't have to use a lot of space with a lot of filters. And maybe if you're just picky like me, you want a specific sound, and then you can easily change which filter you want for this specific sound.
EW (00:38:37):
Well then you don't need to change the patch cables. You can just change which filter you want.
LLR (00:38:42):
Yeah. I mean, you can change the filter, but you can still change a lot of things in your sound system, in your sound generator.
EW (00:38:53):
There's a completely simulated Eurorack called VCV Rack? Is that right?
LLR (00:38:59):
Yes.
EW (00:38:59):
Okay.
LLR (00:39:00):
So VCV Rack is this open source Eurorack simulator, which was developed by a person called Andrew Belt. And since it is open source, it is very easy to create your own modules. And it simulates the Eurorack because it follows the same form factor. You also have traditional Eurorack panels, and then you can do all the patching with wires.
LLR (00:39:33):
And it...tries to simulate everything except the bad things. For example, you cannot connect outputs to outputs; it doesn't let you go input to input. And yeah, it is really nice. So I actually started developing all this for VCV Rack. Since I've been developing modules for a long time, when I found out that VCV Rack was going to be launched..., my first module was this virtual filter for VCV Rack.
LLR (00:40:13):
And I got excited, and I kept doing more and more and more. And once I had all these modules, up to that point, I started thinking maybe I should make a Eurorack module with this. And then I ported everything back to hardware. And what I'm doing now is backporting again. So I want to do like a clone of my own Eurorack module, but for VCV Rack, so that we can simulate all the parts, even the LCD screens and stuff like that. So, yeah, VCV Rack is very approachable, because Eurorack is expensive.
EW (00:40:59):
I think it's funny that you started modeling hardware with software.
LLR (00:41:05):
Yeah...that's what I mentioned, that I cannot decide if I like software or hardware the most, because I'm always like blurring the lines of, I mean, it's difficult to know what is hardware and what is software, in this sense, because I have software that simulates analog and hardware that simulates digital.
EW (00:41:31):
Shifting gears a little bit, you have also made a language to help you with the signal processing pieces. Can you tell me about that?
LLR (00:41:43):
Yes, I have the Vult language, which is a very simple language that I developed...to gain some kind of portability, because some years ago, when I was doing all these synthesizers using different microcontrollers, I would make some piece of code, for example, for DSP. And when I had something done...a new development board would come out and I would want it, and then I would develop something for it. And I started getting problems moving my code.
LLR (00:42:22):
So I decided to make this layer on top of C++, that would be a much simpler language, with fewer things to worry about, and for which I could later create different generators to target the different platforms that I wanted. So for example, I would make an oscillator with this Vult language, and then, if I wanted to run it on a web browser, I could generate JavaScript and then use the web API to test the sound, or even develop it directly in the web browser.
LLR (00:43:06):
And once I'm satisfied with it, I can generate the C++ code that I want to run, for example, on a Teensy board or probably on an Arduino, if it's possible, without making changes. So I wanted to concentrate all my development into this language and be able to move it around, but also add features that I always use.
LLR (00:43:34):
For example, creation of lookup tables. In other projects where I have worked with lookup tables, I used some external software to do the calculations, then wrote a file, and then brought that into the C++. But with the Vult language, I can just write a formula and say, this formula is going to be a lookup table of this size, I'm going to use interpolation, et cetera, and when compiling the code, it does all the calculations and I get the optimized code. And I also have features, or more restrictions, in the language.
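To illustrate the idea only (this is not Vult syntax and not its actual generated output), the result of that kind of compiler feature looks conceptually like a precomputed table plus linear interpolation at runtime. A hedged C++ sketch, here using tanh over an arbitrary range as the "long formula":

```cpp
#include <cmath>
#include <cstdio>

// Hand-written example of what a generated lookup table might look like:
// tanh() precomputed over [-4, 4], linear interpolation at runtime.
constexpr int   kSize = 64;
constexpr float kMin = -4.0f, kMax = 4.0f;
float table[kSize];

void buildTable() {
    for (int i = 0; i < kSize; ++i) {
        float x = kMin + (kMax - kMin) * i / (kSize - 1);
        table[i] = std::tanh(x);          // the expensive formula, evaluated up front
    }
}

float lookupTanh(float x) {
    if (x <= kMin) return table[0];
    if (x >= kMax) return table[kSize - 1];
    float pos = (x - kMin) / (kMax - kMin) * (kSize - 1);
    int   i = int(pos);
    float frac = pos - i;
    return table[i] + frac * (table[i + 1] - table[i]);   // linear interpolation
}

int main() {
    buildTable();
    for (float x = -2.0f; x <= 2.0f; x += 1.0f)
        std::printf("x=%4.1f  lut=%7.4f  exact=%7.4f\n", x, lookupTanh(x), std::tanh(x));
    return 0;
}
```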
LLR (00:44:16):
For example, converting integers and floating point values. One error that I hit a lot before was in C++: when you write the number one, it can be an integer, it can be a double, it can be a floating point, I mean, double-precision or single-precision, and if you write the incorrect one, the C compiler will assume which one...is right.
LLR (00:44:48):
So it can be that you wanted to use single-precision, but since you didn't write the F after the number, it thinks it's double-precision, and then your compiled code will be making double-precision calculations and converting back to single-precision. And I found that that was actually consuming time. So in the language, I also put very strict restrictions on type conversions. Everything needs to be explicit. And yeah, a lot of small things like that I cover with the Vult compiler. And that's what I use for my development.
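The C++ pitfall he describes looks roughly like this (a hedged example; how much it actually costs depends on the target and compiler, especially on MCUs with only a single-precision FPU):

```cpp
#include <cstdio>

int main() {
    float x = 0.5f;

    // 0.1 is a double literal, so this expression is promoted to double
    // precision and converted back -- extra work if there is no double FPU.
    float slowProduct = x * 0.1;

    // 0.1f keeps the whole computation in single precision.
    float fastProduct = x * 0.1f;

    std::printf("%f %f\n", slowProduct, fastProduct);
    return 0;
}
```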
EW (00:45:28):
Are other people using the language or is it just for you?
LLR (00:45:32):
I mean, I'm developing it just for me. And there are a few people that use it, and a lot more trying to use it, but since I don't have very good documentation on it, maybe that's a key factor. But at least the people that I know that use it, they like it a lot.
EW (00:45:53):
And do you, would I have to already have solved all of my equations and this is just putting in the forward methodology or will it also help me figure out the equations, the differential equation solutions?
LLR (00:46:09):
No. For the differential equations, so the Vult language is just for the final implementation.
EW (00:46:15):
Okay.
LLR (00:46:16):
And for doing all the differential equations, as I mentioned before, I work at Wolfram Research, in the group that develops Wolfram System Modeler, which is a tool for modeling systems, and that's the software where I do all my work with the electrical components. And it's a graphical interface, right? So I can recreate my whole circuit with that tool. And then I use Mathematica to extract the equations, and Mathematica is where I do all the manipulation and all the testing and prototyping at the equation level, because it has a lot of tools for doing that.
EW (00:47:04):
That's mostly symbolic manipulation at that point? Or is it already numeric?
LLR (00:47:09):
No, no, it's completely symbolic.
EW (00:47:10):
Okay.
LLR (00:47:10):
Because I want to get, I mean, I want to really use the symbolic capabilities to eliminate as much as possible of the calculations.
CW (00:47:21):
You say you don't have good documentation, but I'm looking at the "Easy DSP with Vult" page. And it walks through the whole thing. It starts with a filter and shows how to write a function that matches that filter. And then, so yeah, I mean, obviously it's not going to do the hard part for you, doing the math, but this starts with that a little bit and goes through it.
LLR (00:47:41):
Yes, yes. Yeah. I mean, I don't have the documentation for all the cool features.
CW (00:47:47):
Yeah, yeah.
LLR (00:47:47):
For the new features that I have implemented.
EW (00:47:51):
Does it integrate with the VCV modular synthesizer? VCV Rack?
LLR (00:48:00):
It integrates through a plug-in. So in VCV Rack, there is a plug-in that is called VCV Prototype, and that module allows you to write code in a single file and then test it. And I use that one a lot as well for my development, because I actually contributed that part of the plug-in, being able to run Vult code in real time. So when I'm developing, I open this module, and then I have my text file with my code and I type in it. And when you save, it immediately starts running in VCV Rack, and it is really, really useful that you can just prototype and test the code.
CW (00:48:47):
Really cool. Look, I don't need any more hobbies, Leonardo. This is very awesome stuff.
LLR (00:48:54):
I will send you some links later.
EW (00:48:55):
If someone hypothetically wanted to get started with this, with creating the models, and maybe not having a PhD in this sort of thing, is there a way to get started?
LLR (00:49:15):
Hmm. Yes. I think that, I mean, the best way totally depends on what you want to achieve, because you can do modules that are nice, that sound nice, that are useful, and that do not require all the kind of detail that I put behind them. I just do it because I know how to do it.
LLR (00:49:42):
And definitely the best way to start is, for example, with VCV Rack. And I actually have some tutorials. I recorded like a three hour tutorial on YouTube, on my YouTube channel, where I show the whole process that I follow. So I show the modeling part and also the implementation. That will be one of the resources that you can use, but definitely a person does not need to go into the modeling in order to do something interesting that sounds nice. If you want to go into the modeling, it will probably be worth reviewing all the theory behind electric circuits and simulation methods, et cetera.
EW (00:50:33):
I have a listener question from Emily: "As someone who's always interested in modular synths, but put off by the price, do you have any advice for how to get started on a budget?" Is this going to be VCV Rack?
LLR (00:50:46):
Yes. I think that the best way of doing that will be using a hybrid setup. So you can install VCV Rack for free, and then you can get a module that helps you bridge the modular sides, I mean, the Eurorack and the virtual Eurorack. And...there are different modules, for example, the Expert Sleepers modules, which are basically a sound card with many channels, and that helps you combine virtual modules with Eurorack modules.
LLR (00:51:30):
So if you have like, on the virtual side, you can have a thousand modules, then you have your bridge, and a few Eurorack modules, just for the sound design, or also...the Eurorack has the advantage...that you can touch the controls.
CW (00:51:53):
Right.
LLR (00:51:53):
And that helps you a lot when performing and...when experimenting. So the modules that you think are good to have on the Eurorack side, you can pick those and use them there. So definitely, just going back, a hybrid setup of something virtual and analog, sorry, something virtual, something real, and the bridge, will be a good starting point.
EW (00:52:28):
Are the VCV Rack modules expensive?
LLR (00:52:34):
No, no, no. The software is open source. There are almost 2000 free modules. I myself have probably 20-something modules, free. And there are a few that are paid modules, and those tend to be cheap, probably less than $20.
EW (00:52:59):
So if I wanted to try modular synths for really cheap, I could do all virtual.
LLR (00:53:05):
Yes.
EW (00:53:05):
And then slowly build up into physical world as well.
LLR (00:53:09):
Yes. And that will help you a lot to decide what you want in the real world.
EW (00:53:17):
I have one more question that I think is just going to be the end of the show and that is, I hear you're coming out with something to do with drums. And I didn't tell Christopher that ahead of time.
LLR (00:53:28):
Oh yeah. So the last months I've been modeling drums, analog drums.
EW (00:53:36):
I mean, you just go bap, bap, bap, right?
CW (00:53:37):
Why are you like this?
LLR (00:53:40):
Yes, something like that.
CW (00:53:41):
Boom, bap, bap, bap, psss. Psss, psss.
LLR (00:53:47):
And these drum models are kind of fascinating, how the original designers of the circuits developed them. So one thinks, "How can I make a cymbal sound? Oh yeah, I just need a noise source, envelopes, et cetera," or a clap sound. That was super interesting for me.
CW (00:54:12):
Yeah.
LLR (00:54:12):
Because, how would one make a clap sound?...And while I was analyzing those modules, I found out a lot of interesting things - that if you don't model something, if you don't model the behavior or the transition curve of this transistor, you don't get a good sound. So yeah, I'm developing these modules, and probably in the future I will make another hardware module containing those.
EW (00:54:46):
Wait a minute, go back. If you don't model the what part of the drum sound?
LLR (00:54:51):
The transition. I mean the non-linearity of the transistor.
EW (00:54:56):
Oh, okay.
CW (00:54:57):
Yeah. 'Cause drum sounds are weird. They're, I mean -
EW (00:54:59):
It's super impulsive.
CW (00:55:00):
Synth sounds you can get your head around because, "Okay, I've got a sine wave or a triangle wave, and then I do this and that," but drums are a very complex mix of noise and transients and also pure tones and resonances.
EW (00:55:16):
Oh yeah.
LLR (00:55:16):
Yes.
CW (00:55:16):
Like for a bass drum. A bass drum is a very low, pure tone combined with the slap and all that stuff. So I've always been kind of fascinated by the analog drum synthesizers, because it's like, "Okay, they have to kind of purpose-build a bunch of circuits." It's not just one synthesizer that you can kind of make anything with. It's like, "Oh, I have to have something over here to make a snare, but it doesn't really work for the kick drum, so I have to have some extra stuff for that." Is that right? They kind of have to be a little bit purpose-built?
LLR (00:55:45):
Yes. For example, the case that I mentioned before with the clap: you have some noise source, and then you have envelopes to do the triggers. But when I was trying to model it, there is a part in which you see a VCA, which is basically a multiplication. And if you do a basic multiplication of the voltages, you don't get the sound. So it needs to be this specific, transistor-based VCA. That way you get the correct transient when changing, yeah, when modulating the amplitude. And only if you use this VCA will you get something that sounds like the original.
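A very rough illustration of that distinction, entirely schematic and not a model of any real VCA circuit: a plain multiplication versus a gain element with an exponential-style control response and a saturating signal path, which changes the shape of the transient.

```cpp
#include <cmath>
#include <cstdio>

// Ideal VCA: plain multiplication of signal and control.
float idealVca(float signal, float control) {
    return signal * control;
}

// Schematic "transistor-ish" VCA: exponential-style control response plus a
// saturating signal path, which reshapes the transient (illustrative only).
float nonlinearVca(float signal, float control) {
    float gain = std::exp(3.0f * (control - 1.0f));
    return std::tanh(signal * gain);
}

int main() {
    for (int n = 0; n < 8; ++n) {
        float env = std::exp(-0.5f * n);   // a decaying envelope as the control
        std::printf("ideal=%7.4f  nonlinear=%7.4f\n",
                    idealVca(0.8f, env), nonlinearVca(0.8f, env));
    }
    return 0;
}
```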
CW (00:56:37):
Yeah.
EW (00:56:39):
Well, wouldn't it be simpler to model the original sound? I mean, model the handclap instead of model the transistor electronics that models the handclap?
CW (00:56:52):
How do you model a handclap?
LLR (00:56:55):
I mean, you can always record the hit, like people clapping, use a sample. That's the simplest model probably.
CW (00:57:03):
I mean, there is physical modeling synthesis.
LLR (00:57:07):
Yes.
CW (00:57:07):
Like where, there's piano models. I have one that's quite good and it's totally fake, and they've gone through and figured out how pianos work and modeled it that way. But that's more of a simple thing: you have a hammer and then the pure tone from the strike. I don't know how you'd model something as weird as just an impact with skin. That's interesting.
LLR (00:57:30):
One interesting thing about the drums is that when, for example, with the Roland 808 drum machine, the designers were trying to do something that sounded like drums. It ended up not sounding like a real drum kit, but the fact that that sound was different and unique made it super influential in eighties music.
CW (00:57:58):
Yeah. That's probably like the most famous, when people think of analog drums, that's the synthesizer I think people gravitate to thinking about.
LLR (00:58:07):
Yes.
EW (00:58:09):
I'm still boggled by you going through electronics because it seems like the physical model would be simpler. And the electronics people had to have done the physical model at some point.
CW (00:58:20):
Well but, correct me if I'm wrong, you're following on from the analog drum sets of the past. So you're trying to recreate kind of that sound, not, you're not out there to make realistic drum sounds in an analog synthesizer.
LLR (00:58:35):
Yes. Well, what I'm actually interested in is learning how these drums work.
CW (00:58:41):
Yeah.
LLR (00:58:41):
And in my learning process, I end up recreating them and then I can just publish them for other people to use.
CW (00:58:52):
Yeah. If you want to make realistic drum sounds you sample 'em.
LLR (00:58:54):
Yes.
EW (00:58:58):
I have a question from Tom: "Are the virtual synths really a good try-before-you-buy sort of representation or are there important differences between hardware and software implementations?"
LLR (00:59:15):
Okay. So actually this question, I didn't understand very well if it's like in general or for my specific -
CW (00:59:23):
I think in general.
EW (00:59:24):
Yeah. In general. I mean, I'm hoping that for your specific, you'll say, "Oh no, they're rock solid, right on." But in general, are there things people should look for that indicate that differences are likely, or is this just a matter of reading the reviews?
LLR (00:59:42):
I actually don't know how to answer this one because, as we were talking about before, everything works as long as you like the sound. Everything works.
CW (00:59:56):
Yeah. That's the thing about synths is that you can get into a battle about virtual versus real, and it's like, if it sounds good, who cares?
LLR (01:00:05):
Yeah, I think that the main difference will be just the interface, the fact that you can touch it, this is what makes it different.
CW (01:00:13):
And that's a pretty big deal.
LLR (01:00:15):
And, I mean, if we reroute the question to my own stuff, in my personal case, the stuff that I have for free wasn't intended as try-before-you-buy. But what ended up happening is that a lot of the people that tried my stuff tell me, "Yeah, I really like the filters. That's why I decided to get the hardware."
EW (01:00:51):
Cool. And we've talked a lot about all of this, and you've mentioned your job...and you've mentioned using the simulators that are part of your job to fuel your side gig, fuel Vult. How do you lead this double life? How do you have time to lead this double life?
LLR (01:01:20):
It's complicated. So, on Vult I usually work just one hour, two hours during the night and during the weekends. But one nice thing is that since I'm using the tools that I'm developing for Wolfram, I usually come up with things like, okay, probably I found a bug because I'm using our own tools, or I found something that could be improved.
LLR (01:01:54):
And at the same time, I have published with Wolfram some of the work that I do with Vult...I published a blog post about how I use the tools to do that. So I think that we have a kind of nice symbiotic relationship, in which this, my hobby, helps me with the work that I do at Wolfram. Also, in the development of the compiler, there are things that I try in my Vult language, experiments that I do, and then sometimes I say, "Oh, I really like the way that I made this part of the code, probably we could apply it to the product" to generate better code or something like that.
EW (01:02:51):
Leonardo, do you have any thoughts you'd like to leave us with?
LLR (01:02:57):
Yes. I was remembering the phrase from a famous statistician, who said that "all models are fake, but some models are useful," and that's kind of the mantra of my life.
EW (01:03:14):
Our guest has been Leonardo Laguna Ruiz, founder of Vult. Check out his site, vult-dsp.com. We'll have links to that in the show notes, as well as to his wonderful YouTube tutorials.
CW (01:03:29):
Thanks, Leonardo.
LLR (01:03:30):
Thank you. It was my pleasure.
EW (01:03:32):
Thank you to Christopher for producing and co-hosting. Thank you to Leonardo again, for suggesting he be on the show, and in the subject line saying he wasn't a robot. And of course, thank you for listening. You can always contact us at show@embedded.fm or the contact link on Embedded FM.
EW (01:03:53):
And now I have a quote to leave you with from E.O. Wilson. Which, I always think of him as the ant guy, 'cause that was the first nonfiction book I read that I enjoyed. "We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely."