242: The Cilantro of Robots

Transcript for Embedded 242: The Cilantro of Robots with Christine Sunu, Christopher White, and Elecia White.

EW (00:06):

Welcome to Embedded. I am Elecia White, alongside Christopher White. Our guest this week is Christine Sunu. We are going to talk about robots and their feelings, I think.

CW (00:20):

Hi Christine. Welcome to the show.

CS (00:22):

Hey. Lovely to be here.

EW (00:23):

Could you tell us a bit about yourself?

CS (00:27):

Absolutely. So I currently do rapid prototyping and product for hardware and emotive technology at Flash Bang Product Development. I also post tutorials on electronics and design at hackpretty.com. I previously ran the developer community at Particle and I worked on these weird open source robots at Buzzfeed's Open Lab for Journalism, Technology, and the Arts. So most of my work focuses on emotive interactivity in physical and digital objects. That means I work on getting people to feel living emotions about non-alive things, and that might be an app or a hardware product or a robot pet. In other words, I work on emotive tech: technology that accounts for our humanity and makes us feel things.

EW (01:07):

Okay. That really does lead to so many questions. Before we get into how and why and what, and again, how and why, we want to do a lightning round where we ask you short questions and we want short answers, and if we are behaving ourselves, we won't ask you how and why there. We'll see how that goes.

CS (01:28):

Cool.

EW (01:30):

Favorite skeletal structure?

CS (01:33):

Ooh. I like the bones in the wrist. It's a lot more bones than you would expect and they all have very unique names.

CW (01:41):

Hacking, making, tinkering, engineering, teaching, or programming? That got longer than before.

CS (01:46):

Oh, oh my god. You need all of those. Do you mean what I like to do? It depends on my mood. I feel like you need to tinker to hack, you need to make to do enclosures, you need to engineer to make it really good. There's so many things. And then, of course, teaching is just something that is fun and everybody should try.

EW (02:09):

Did you have a Tamagotchi?

CS (02:11):

Oh no, I didn't. I wasn't allowed to have a Tamagotchi. I did later as an adult. I had trouble finding my keys and I thought if they beeped every once in a while it would be easier. It was actually really sad: I then dropped the Tamagotchi on the sidewalk and killed it permanently, and felt emotions. It was really weird. It actually permanently died.

CW (02:35):

Well, the follow-up question is no longer operative then.

EW (02:38):

What? Favorite fictional-

CW (02:39):

If so, how long did it live?

CS (02:43):

Okay, so that Tamagotchi lived continuously for... I think I probably had it on the key chain for at least four months. I'm a little clumsy, I drop my keys quite frequently. But yeah, I think it lived a pretty good amount of time. I mean, that's not really a good excuse, because I'm an adult, so technically, if I was putting my mind to it, I should be able to sustain the robotic creature continuously.

CW (03:10):

What's your favorite fictional robot then?

CS (03:13):

Oh, have you read Accelerando?

EW (03:19):

Yeah, by Charles Stross.

CS (03:21):

Yep, yep. Yeah, so I like the Aineko.

CW (03:25):

I didn't read it. Do you remember?

CS (03:29):

So he has a robot cat and he updates the robot cat continuously and then there's this whole complicated subplot around the robot cat. But yeah, I really like that book, too.

CW (03:42):

Maybe I did read it.

EW (03:42):

What about non-fictional favorite robot?

CS (03:48):

Let me think. God, there's so many. I do really like the Keepon. The Keepon just makes me laugh all the time. It's hard for me to not laugh every time I see it. Have you seen this? It looks like two kind of yellowy balls that sit one on top of the other, and it has a little face. It was designed to interact with kids and to help kids, I think, interact with music more emotionally, and so you have the robot that just basically dances. Its only purpose is to dance. It's extremely joyful, it's really cute. I think it's well designed and I think it's a good simple design. And yeah, sorry, that was a longer answer than I think I was supposed to give.

EW (04:35):

That's fine. Do you want to go next Christopher?

CW (04:39):

I was looking at the Keepon.

CS (04:42):

Right. You should watch the video where they take it through the city. It's great.

CW (04:47):

What's a tip you think everyone should know?

CS (04:51):

Understand the pop culture context for the product you're building. When it comes to robots, it often means understanding how whoever you're designing for, how that audience frequently perceives robots.

EW (05:04):

I'm going to stop with lightning round questions, because I want to dig into that. So often engineers act superior about rising above pop culture, and you're saying that it is important.

CS (05:22):

Well, I think that, to me, at the end of the day, it comes down to usability. Right? You want somebody to use your product. So if you want the most people possible to use and love and accept the thing that you built, then you have to understand what's driving them and what they are likely to enjoy, and you have to understand it so that you don't accidentally put up barriers that make them not want to interact with your thing. So for a lot of the sociable robot kind of stuff, a lot of that is understanding the context in which we perceive robots largely, in which the media portrays robots. What does a friendly robot look like and what does an unfriendly robot look like? And I think that also differs very greatly by what culture you're in and what part of the world you're in, so it can get kind of complicated.

CW (06:16):

Yeah, because, just for myself, if somebody made a robot look like R2D2, that would automatically be friendly-

EW (06:24):

Check all the boxes.

CW (06:25):

... and if somebody made a robot that looked like HAL with the red light, I would immediately be intimidated.

CS (06:31):

Exactly. There's a reason why the Amazon Echo has a blue circle instead of a red circle. You don't want it to just look like a giant red eyeball staring at you. Right? And one of the cool things is that frequently when people have created robots in movies and television and comics, they have created them with a lot of things in mind to code them to be friendly or unfriendly. And, as designers, we can work with that. Not only has it been imprinted on the audience that we're frequently trying to get to use our products, but it also has already accounted for some of the problems that we're trying to solve. How do you make a robot look more friendly? Well, in the case of R2D2, you make him round, you make him babble like a baby, you don't give him any large claws. You might put one eye, but it's not clearly a scary eye. So there's been a lot of work that people do on individual robots in movies to make them appealing or unappealing.

EW (07:27):

Okay. Before we dig in further, I want to talk to you about some of your projects, so we have some concrete examples to talk about. Do you have any favorite projects?

CS (07:38):

Oh, I have a soft spot for the Starfish Cat.

EW (07:42):

Okay, so this was on Buzzfeed. And it was fuzzy on top and looked sort of starfishy and was a little frightening from there. I'm sorry, I know you want it to be emotive, but it was a little frightening.

CS (07:56):

Oh, that's on purpose. So what I called the Starfish Cat was the Starfish Cat emotional discomfort experiment.

EW (08:02):

I see. It works.

CS (08:11):

When I started at Buzzfeed, I had a fellowship at Buzzfeed through GE. They paid me to be there for a year to hang out and build these internet connected devices. And so, I don't know if they thought I was going to build a connected microwave or something, but I didn't. I built these different interfaces for connectivity and different interfaces for tech, to experiment with how people saw and reacted to exteriors, and to exteriors when they integrated motion and things that felt automatic. So the Starfish Cat was an experiment in what happens when you take different emotive cues and mix them in a way that might make people very uncomfortable. I frequently see products that do this by accident. So you will see, for example, my favorite was... they made a Furby in, I think, 2011 or something.

CS (09:07):

It was one of the newer ones that they tried to make, and it had these terrifying glowing eyes, because they used these, I think, OLED screens for the eyes. And it makes sense from a parts perspective, they wanted the eyes to animate, but, I mean, it had the effect of when you turn the lights out, you just have this thing staring at you in the dark with these terrifying glowing eyes, which mixes cues with this thing that is otherwise quite cute and fuzzy. So I wanted to kind of examine what happens when you push that further and also have this example of, look, this is what happens when you mix cues. The way you feel, that's also the way you feel when you run into these other things.

CS (09:47):

So I ended up making the Starfish Cat. The idea was to have the top be this adorable, fluffy, sweet looking kitty shape, so it has this head and these cute little ears that are, I think, soft PLA, so they're sort of flexy and you can touch them. And he has closed eyes, looks very serene. When you walk by it, it starts to meow pitifully and knead these little claws. It's strange, because then when you pick it up, you realize that it doesn't just have two claws in the front, it has five claws that go all the way around five points on this terrifying, weird starfish bottom that's rubbery. There are five IR thermometers on the bottom of it that seek heat, so when it's closer to heat, it kneads the two little claws that are the closest to the warmest spot.

CS (10:39):

If you actually pick it up and hold it to yourself, which people do, it will knead in the direction of your body heat, which means that it's trying to basically get to the warmest spot, which frequently is bare skin. When it senses that most of the sensors are on bare skin, it starts to suckle you with a weird pneumatic motor that I got from China. It's maxing out on a lot of the weirdness and discomfort and ambiguous signals. So the ambiguous signal of a thing that has its mouth on you. People think it's really cute when a dog licks them, but they think it's really terrifying if, I don't know, a lion that looks hungry and is salivating licks them, so it's very much mixing cues.
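A rough sketch of that knead-toward-heat behavior, in Particle-style C++ (Christine mentions later that she builds on Particle boards), might look like the following. Only the logic comes from her description: five IR thermometers, knead the two claws nearest the warmest spot, suckle when most sensors read skin. The pins, the analog thermometer interface, the 30 C threshold, and the servo angles are guesses, not her actual firmware.

```cpp
#include "Particle.h"

Servo claw[5];
const int irPin[5] = {A0, A1, A2, A3, A4};   // assumed analog IR thermometers
const int clawPin[5] = {D0, D1, D2, D3, A5}; // assumed PWM-capable servo pins
const int SUCKLE_PIN = D7;                   // assumed pneumatic motor driver

float readTempC(int i) {
    // Placeholder conversion; a real IR thermometer needs its own calibration.
    return analogRead(irPin[i]) * (50.0f / 4095.0f);
}

void setup() {
    for (int i = 0; i < 5; i++) claw[i].attach(clawPin[i]);
    pinMode(SUCKLE_PIN, OUTPUT);
}

void loop() {
    // Find the warmest point and count how many sensors read skin-warm.
    float t[5];
    int warmest = 0, skinCount = 0;
    for (int i = 0; i < 5; i++) {
        t[i] = readTempC(i);
        if (t[i] > t[warmest]) warmest = i;
        if (t[i] > 30.0f) skinCount++;       // roughly skin temperature
    }

    // Knead the two claws flanking the warmest spot; the star wraps around.
    bool kneadPhase = (millis() / 300) % 2;  // simple open/close rhythm
    for (int i = 0; i < 5; i++) {
        bool nearest = (i == (warmest + 1) % 5) || (i == (warmest + 4) % 5);
        claw[i].write(nearest && kneadPhase ? 60 : 90);
    }

    // "When it senses that most of the sensors are on bare skin,
    // it starts to suckle."
    digitalWrite(SUCKLE_PIN, skinCount >= 3 ? HIGH : LOW);
    delay(50);
}
```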

EW (11:26):

How is this the second show where we talk about robots licking you?

CW (11:32):

Do you know Sarah Petkus, or are you aware of her?

EW (11:35):

Because you guys would be good friends.

CS (11:37):

No, I don't. I would love to talk to that person.

CW (11:40):

She's making robots that can taste people.

CS (11:44):

That sounds cool. That's a really hard problem, actually. I was watching a documentary about dogs last night, and I could go on about the comparison between dogs and dog robots for a long time, but there's a lot of things that dogs have, sensors that dogs have, that are just so difficult to build into a dog robot. The olfactory abilities that they have are just so far beyond what we would normally be able to do. And then, on top of that, there's the opposite thing that also happens where we build things into robots that the dogs don't actually have. So, apparently, dogs are really bad with spatial memory. If you put them in an eight-arm radial maze, they can't figure it out. There's weird stuff like that, where I'm like, "Okay, it sounds like some of the things that we build into robot dogs are overkill," which is an opinion I already hold. But it's overkill even if you compare it to a real dog.

EW (12:52):

Okay, moving on from incredibly creepy Starfish Cat.

CW (12:56):

I think it's cute.

EW (12:58):

Yeah.

CS (12:58):

See, that's the point. The comments on that Buzzfeed article were my favorite thing that has ever happened. It was a mix of people being like, "Oh, it's cute. Can I adopt it?" And then people being like, "This is the creepiest thing I have ever seen." And a couple people saying, "Is this real?"

CW (13:12):

It's like the cilantro of robots.

CS (13:14):

Yes.

EW (13:18):

Okay. Tell me about Fur Worm.

CS (13:21):

Oh, the Fur Worm. People keep asking me if I named the Fur Worm and I don't think I ever did. So I was going to do this talk; it was about the minimums that you need for the perception of life, so how you make somebody think that something's alive while doing as little work as possible.

CW (13:42):

What's the minimum alive...

EW (13:48):

[crosstalk 00:13:48] Well, that's like a heartbeat. I remember working on toys and we would talk about putting heartbeats into the infant toys, because babies really like a constant, "thump, thump, thump, thump."

CW (13:59):

Sure.

CS (14:00):

Absolutely. And I think for really young kids, that probably works super well. The thesis of this talk was really that if you make it appear that it needs stuff, people will automatically fill in the blanks and say that's probably alive. Because a robot doesn't need things, but a living thing does. So if you give it enough consistency in its reactions to certain stimuli, but also vary it with enough randomness that some of its actions feel emotionally random, then you actually end up with an interface, through very little work, very little code, where people just go, "Oh my god, that must be alive." So I built the Fur Worm as an example of that. It's three servos, it's really simple. And when you squish it, it squirms. That's it. If you squish it for longer, it squirms harder. The degree of its squirming gets more, but it's not consistently more. It's using Perlin noise to vary the reactions.

CS (15:00):

So it ended up being realistic enough that as I was building it, it made me kind of cringey. I would squeeze it and then I would start to feel bad. So I built this for the talk, and I held it on stage the whole time. At intermission, before I did the talk, I walked around and I showed people the worm, and I started to feel really bad, because they were saying things like, "Oh my god, it reminds me of my dog. It's really cute." And I didn't think it would elicit that strong of a reaction, because at the end of the talk, the plan, which I did do, was to break the worm in half, to show that you really actually were bonded to this thing, even after I had just explained to you exactly how it works. But it doesn't matter, because your instinct as a human, for empathy, is so much stronger than your logical mind.

CS (15:49):

So yeah, I broke it in half on stage and it's funny, because in the talk there's this applause at the end that I don't remember. I remember walking off the stage to stunned silence. There was a woman who came up to me afterward who introduced herself and said, "I'm a cognitive scientist and I knew everything that you were doing and I still cried when you broke the worm in half." So this technology is really powerful. I mean, you're talking about a couple of lines of code that are quite simple, essentially, but which give people such a deep reaction and such a deep instinct that the thing you're holding might be alive, and that has to do with being human.
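Her point about how little code this takes is easy to believe. A minimal sketch of the squirm behavior might look like the following; the three servos, the squish-longer-squirm-harder rule, and the Perlin-noise variation come from her description, while the pins, the force sensor, the thresholds, and the cheap smooth-noise stand-in are assumptions.

```cpp
#include "Particle.h"

Servo seg[3];                  // the worm's three servo segments
const int SQUISH_PIN = A0;     // assumed force-sensitive resistor
float squishTime = 0.0f;       // seconds the worm has been squeezed

// Cheap stand-in for Perlin noise: a sum of incommensurate sines gives
// smooth, non-repeating variation without a full noise implementation.
float smoothNoise(float t) {
    return 0.5f * sin(1.3f * t) + 0.3f * sin(3.7f * t) + 0.2f * sin(9.1f * t);
}

void setup() {
    seg[0].attach(D0);
    seg[1].attach(D1);
    seg[2].attach(D2);
}

void loop() {
    // Longer squishes build up intensity; releasing resets it.
    bool squished = analogRead(SQUISH_PIN) > 2000;   // threshold is a guess
    squishTime = squished ? squishTime + 0.02f : 0.0f;
    float intensity = min(squishTime, 1.0f);

    // Each segment wiggles around center. Intensity scales the squirm,
    // but the noise keeps it from ever being exactly repeatable.
    float t = millis() / 1000.0f;
    for (int i = 0; i < 3; i++) {
        float wiggle = smoothNoise(t * (2.0f + 6.0f * intensity) + 10.0f * i);
        seg[i].write(90 + (int)(60.0f * intensity * wiggle));
    }
    delay(20);   // roughly a 50 Hz update
}
```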

EW (16:33):

I totally understand this, because you put googly eyes on something, and suddenly it is friendly.

CS (16:40):

Mm-hmm (affirmative).

EW (16:43):

And even just talking about this robot, which I knew was a robot, you said you broke it in half and I'm ready to cry.

CS (16:51):

Yeah, it's rough. I almost didn't do it. I had planned the talk for weeks and I didn't know that people were going to react that strongly. I almost didn't do it when I was standing up there. Afterward, I kind of backpedaled, because I finished the talk, but then I was still up on the stage and I was like, "It's okay. It's not dead. I mean, it was never alive, but it's also not dead."

EW (17:14):

I'm not sure you're making it better there.

CW (17:20):

Doesn't this suggest that, I don't know where I'm headed with this, but it just scares me a little bit that we're that easily manipulated.

CS (17:30):

Absolutely.

CW (17:30):

Where does this go?

CS (17:32):

I mean, it is very alarming. Right? Because when you form a relationship with a robot or with an interface and you feel for it as though it's alive, you think that when you're doing something, you're doing what that object wants you to do, what that thing that you perceive as a living creature wants you to do, what it needs to survive. But what you're actually doing is whatever the person who created it wants you to do, whatever they think you should do, and whatever they predicted that you will probably do once you interact with it. So it's kind of ushering in this interesting, dark new era of emotive technology, where hypothetically, somebody could use it for great evil, or simply carelessly. Even if you were using emotive tech and you weren't trying to harm people, you could still very easily do so by not understanding the full implications of the thing you were building, or by trying to build something very well for purpose A and then it turns out to be disastrous for purpose B.

CS (18:43):

I had talked about, in that talk, I think, the hypothetical situation in which somebody builds a friendly factory robot, a robot that is friendly to its coworkers, so they're more likely to accept it, it increases team bonding and camaraderie. But what do you do when it malfunctions on the line and you need to pull the plug? In the 10 seconds, two seconds, 500 milliseconds that it takes you to realign your perspective and say, "No, it's a robot. It's okay for me to pull the plug," is somebody who is actually alive going to be hurt? So there's a lot of interesting things that come up here, because we are really affected by the perception that something might be alive, and we're strongly affected by it at very small minimums.

CS (19:37):

One of my favorite Reddit threads of all time is about people zoomorphizing objects, and people start talking about their Roombas. And people bring up the fact that the Roomba, it doesn't really seem alive in any way, but people still react to it in a way that's like, "Oh, I need to help it when it gets stuck." People will pay more money to repair a Roomba that they own, even though it costs less to buy a new one. My favorite story was a guy saying that he used to jokingly, to his friends, refer to the Roomba as his son. So when it would mess up, the person would say, "Oh, I'm so disappointed in him."

CS (20:23):

But then, the Roomba broke and the person felt so bad that he or she paid all this money to have the robot repaired. Because even as a joke, when you make a joke about your relationship with the robot, you say, "I'm playing this game." But not all of you is playing. Some part of you really starts to act towards the object as though it's alive, as though you have an existing relationship to it, as though you are its caretaker and it relies on you.

CW (20:55):

Something you said, I just want to talk about for a second, you said the minimal set of things to kind of relate to. It seems like it's easier to make an artificial construct that's relatable and lovable by a large group of people, by choosing the right things, the right attributes. It's easier to make that something to bond with than, perhaps, other people? You see where I'm going with that?

CS (21:29):

See this is the danger.

EW (21:30):

Oh yeah.

CS (21:31):

Yes, absolutely. This is one of the huge dangers, that you might be able to create interfaces where people would prefer the interface to another human. I mean, this is something that we already see with cell phones. I guess cell phone is kind of the older millennial term for it; this is what we already see with phones. It's something where if you get enough stimulus from... people have started talking about this more, that the amount of stimulus you get from your phone and the predictability of conversation is not only more interesting to a lot of people at some kind of weird basic level than another person, but it also changes the way that you do interactions in person with real humans.

CS (22:14):

So yeah, we build our tools and our tools shape us. This is a real strange thing that's happening now, now that we have the ability to really create interfaces that are more powerful, more emotionally powerful, and where we might be able to automate one side of the conversation.

EW (22:34):

And yet, we're going to have to remember that if you have to choose between the cute, fuzzy robot and the guy you don't like, one of them is alive and the other isn't.

CS (22:50):

Absolutely. It's a very strange field. This is also one of the reasons why I'm a large advocate for emotive tech interfaces that, rather than connecting us to an automated space, connect us to ourselves or to each other. So with the reaction of the object, you're not grounding the object as a robot in and of itself. You're grounding the object as something that's reacting to my actions, something that I work with and that I work on. It's something that very clearly has to do with me, versus something that's reacting fully on its own as an automated secondary being. So one example, a very simplified example of this, because I realize that starts to sound very confusing, is the robotic fridge cat that I built when I was doing Everyday Bots on The Verge.

CS (23:57):

So it's this fridge magnet, with eyebrows, and it's for people who frequently are home or work from home. And at the times when you want to eat, it makes a hungry, sad face. Its eyebrows go up into a sad position. And then, at the times that you don't really want to be eating, because frequently people say, "I'm snacking too much, or I don't want to be," it makes this more angry face. So the more times that you open the fridge, the angrier the face gets. And this is a really simplified interface, right? But it just gets you more. It's this more glanceable and emotionally understandable thing, and what you've done, basically, is you've doubled your conscience. Right? You've created an external version of you that reacts consistently to a stimulus that you were already thinking, "I don't want to do that as much." Because this robot is a reflection of you and you know that, I would argue that it's easier for you to distance yourself from it having its own personhood.

CS (24:55):

Some of these ideas are still going to be quite difficult, in that every time you play the game of it being an external being, it more and more becomes that, but yeah, so that sort of thing. We can build these interfaces that help us, that connect us to ourselves and our goals, and that can be potentially extremely useful.

EW (25:21):

Yeah. But, I mean, the fridge cat was pretty simple. It was a couple motors to control the eyebrows and a fridge door open sensor and then a little controller. I mean, it wasn't super complicated tech.

CS (25:38):

Right. It was not an intentionally emotionally psychoactive interface. It was the simplest thing possible, so that you could build it if you wanted to and you could start using some of the emotive tech principles to help you in your goals. Yeah, I think there's a really good chance that you will increasingly see people building sort of differently contexted and more aggressive emotional technology, and that's going to be a really interesting world.
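In that build-it-yourself spirit, here is a guess at the fridge cat's logic as a Particle-style sketch. The episode gives the behavior and the parts list (eyebrow motors, a fridge door sensor, a little controller); the meal hours, the pins, the servo angles, and the hourly reset of the snack counter are invented for illustration.

```cpp
#include "Particle.h"

Servo browL, browR;
const int DOOR_PIN = D4;      // assumed reed switch: reads LOW when open
int opensThisHour = 0;        // fridge openings since the top of the hour
int lastHour = -1;
bool wasOpen = false;

bool isMealtime() {
    int h = Time.hour();      // Particle's built-in, cloud-synced clock
    return h == 8 || h == 12 || h == 18;   // made-up meal hours
}

void setup() {
    browL.attach(D0);
    browR.attach(D1);
    pinMode(DOOR_PIN, INPUT_PULLUP);
}

void loop() {
    if (Time.hour() != lastHour) {          // forgive last hour's snacking
        opensThisHour = 0;
        lastHour = Time.hour();
    }

    bool open = (digitalRead(DOOR_PIN) == LOW);
    if (open && !wasOpen) opensThisHour++;  // count each new opening
    wasOpen = open;

    if (isMealtime()) {
        browL.write(120);                   // inner ends up: sad and hungry
        browR.write(60);
    } else {
        int anger = min(15 * opensThisHour, 60);
        browL.write(90 - anger);            // inner ends down: angrier with
        browR.write(90 + anger);            // every extra opening
    }
    delay(100);
}
```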

EW (26:13):

And it's different than gamification. Gamification is more when you create an environment that makes people do things to win. Like my Fitbit tells me I only have 250 more steps before I meet my daily goal. Or even Pokemon Go was gamification; it got you to walk by letting you find creatures. But emotive tech is more taking your Tamagotchi for a walk or taking your Fitbit for a walk. It doesn't have to be one or the other, but they are different, right?

CS (26:51):

Yes. I mean, I think that it's a similar category, and in some ways you can track the way in which we increasingly perceive gamification as a crazy powerful tool that can be good for people or bad for people. I think that's probably what you'll see with emotive tech, as well. Because sometimes gamification helps people accomplish their goals, it helps them do self-improvement. Sometimes it causes you to spend way too much money in a game with in-app purchases. So you're likely to see a very similar line, I think, for emotive technology.

EW (27:27):

But instead of our desire to win, it would be manipulating our other emotions.

CS (27:36):

Yes. And one of the things that's potentially dangerous about this, too, is that with gamification we can look at an interface and say, "Oh, I know what trick you're using. You're using this gamification trick. You're using a points trick. Right? So you're trying to get me riled up through feeling like this is a game. So I'm not going to play your game." We can do that, you can opt out in some ways. And some people will have more trouble opting out than others, but I think most people would say, "Okay, I can reasonably opt out of that," versus it's much harder to opt out of an actual emotion that you're feeling. It's harder to opt out of the natural instinct for human empathy. And what does it mean if we were able to opt out of that? What does that even mean? I built a robot where it struggles and then dies and you feel something about it. If somebody were using this as a tool to manipulate you, how easy is it to opt out of that? You'd basically be saying, "Well, when I see something struggle and die, I'm just not going to feel anything about it," and I don't know that we want to really go to that place.

CW (28:56):

Maybe you've made a sociopath detector, in that case.

CS (28:59):

Yeah, and it's hard. Again, we shape our tools and our tools shape us. If we start using this really frequently and if we use it carelessly, then you force people to build an immunity. And there's a lot of questions about how that guides society and this is some of the things that I think about, but it's not technically new either. Basically, you've been dealing with this sort of thing ever since we were able to build these automated interfaces. There's a lot of really beautiful research that's been done by Sherry Turkle about this. She has some stories that can just be really devastating. She talks a lot about the effect of technology on children and the effect of questionably alive technology, too. You give a kid a robot and up to a certain age, they're deliberating about whether or not it's alive, and the justifications for whether or not it's alive change.

CS (29:55):

A really young kid might say, "It's alive because it has eyes." And an older kid might say, "Oh, it's alive because it tries to cheat." There's interesting developmental markers that you can see. But for all the kids it causes a remarkable emotional response. So the robot as an interface doesn't behave, doesn't play by the same rules as a human. And as adults we can look at that and say, "Okay, it doesn't play by the same rules as a human. That's okay, because it's a robot." But for a kid, where the ambiguity about whether it's living or not is high, it doesn't really work that way. So there's one story where they had a kid who came in to the MIT lab to interact with Kismet, which was this robot with a very expressive face. And the robot was malfunctioning, and the kid was horribly depressed by the robot malfunctioning.

CS (30:57):

And the reaction of the child was just so extreme. The kid felt ignored, the kid felt that this thing that she had been prepared for, that she was really excited about, was not going to happen because the robot hated her. It generated all of these feelings about her identity and herself in relation to an interface that failed by accident and was highly simplistic. I don't know, I mean, it's something that I think is worth being aware of. I don't think that it means that we shouldn't build robots, I don't think it means we shouldn't build emotive robots. I just think that it's important to be aware of the potential effects on people, and try to build them responsibly.

EW (31:49):

Yes, yes. I agree so much. And since I often fall into anthropomorphizing everything, please be careful with your emotive robots. And don't break them in front of me, my god. Moving into the technology pieces, how did you learn the things you needed to learn to make these? And I know your background is not traditional EE, CS, so maybe that's part of that question.

CS (32:22):

I like to think of my background as being in people. Everything that I've done has been about how people perceive different things and how they work with those and how you communicate with them. How do people communicate with each other? How can I effectively communicate with you? And I feel like the coding and the robotics stuff got layered on top of that, which is interesting. I tried to learn C when I was 13 or 14 and I quickly lost interest, I think because there weren't enough example-based things; I didn't have any ideas of things that I could build. My dad said, "Why don't you build an algorithm that, when I put in a number..." this is really funny, he said, "I want to be able to put in a number and then I want it to ask me a couple questions and then be able to mathematically figure out and guess what number it was." So I said, "Okay," and an hour later or something, I came back to him and he put in the number and it asked three questions, then said, "Your number was five." And he's like, "Wow, that's incredible." And, of course, I was just passing the input back through the output and asking three random questions.
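The trick really is just a few lines. A guess at what that program might have looked like, with made-up questions:

```cpp
#include <iostream>
#include <string>

int main() {
    int number;
    std::string ignored;

    std::cout << "Enter a number: ";
    std::cin >> number;               // remember the "secret" number

    // Ask three questions purely for show...
    const char* questions[] = {"Is it raining? ", "Do you like dogs? ",
                               "Is it Tuesday? "};
    for (const char* q : questions) {
        std::cout << q;
        std::cin >> ignored;          // ...and ignore every answer
    }

    // The "mathematically figured out" answer is just the input echoed back.
    std::cout << "Your number was " << number << "!" << std::endl;
    return 0;
}
```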

CS (33:30):

And so I rapidly lost interest, because I'm like, "Well, it seems like there's a lot of shortcuts you can do and I'm not sure what I'm supposed to do with this." Later, when I was in college, I started using programming as a way to do better research, because I was doing these various kinds of scientific research. During college, I was working in a biophysics lab where I had to do a bunch of image processing of these paramecia moving in dishes through MATLAB. Then, after college, I was working in a neurogenetics lab when next generation sequencing was still next gen, so there was nothing written for it, so I had to do a lot of code for that. And I had a lot of questions about how do I make things more interesting? How do I bring weird physical objects into that world?

CS (34:22):

I got really into thinking about that, but I didn't actually start building a lot of things until years later when I moved to San Francisco, and I was working for Particle and doing things with the developer community and using the tools a lot, and really learning to rapidly build weird hacks. So yeah, that's kind of how I ended up doing the things I'm doing, by just building a lot of stuff and thinking about people.

EW (34:54):

And Particle, they make the Photon and the Electron; they make a number of small systems that are internet enabled.

CS (35:05):

Yeah, so they make internet connected dev boards and the software infrastructure you need to run them. So it's a really cool platform, it's easy to use. That's one of the main reasons to use it. It's very fast to get started, and then if you accidentally put the dev board into an enclosure that you can't open, you can flash code over the air, which is really useful. So I frequently use those boards, partially because I know the system really well from working there and partially because I have like 1,000 of them. I worked there and my business partner worked there, and so we just have so many Photons and Electrons in the house, which is great.

EW (35:55):

When you're making a project, what takes the longest: making the project, making the video, or writing the instructions?

CS (36:01):

It depends on the project. I think that the fridge cat video took longer than actually conceiving of and making it, because it was so simple and straightforward, but the coffee maker, when we did the automatic coffee maker, tweaking that 3D print just took forever and it's still not perfect. So, in that case, the build took a lot longer than the video. I think it just really depends.

EW (36:31):

And why do you write the instructions for them? I mean, it's fun to make these things, but why do you make them open source? Why do you let other people build them?

CS (36:42):

Right. That's a good question. When I was trying to learn all this stuff and, I mean, I'm still learning so much of it, so much of what I do has been assisted and helped by people putting clear instructions online. I would not be able to make the things that I make if people hadn't documented their work. Because a lot of what I do, I don't have formal training, a lot of what I do is I ask one of my friends who does have formal training what I should Google, then I find a bunch of examples and I cobble together what I need. And I actually learn a lot while I do that, so I want to be able to do that for other people. I mean, not to mention that documenting your work is just a really good way to go over what you did and have a record of it for yourself and be able to build it again later when you inevitably forget how and why you did that.

EW (37:41):

What advice do you have for people who have an idea like this and are daunted by the amount of work that it might take? Somebody that wants to build an RFID tag for their pet door or something like that, what advice do you have to keep them motivated and going?

CS (38:03):

Well, one is that you never know until you start. It might be a lot easier than you're thinking or you might find a shortcut that makes it 1,000 times easier. And the other is, there are people everywhere building things, a lot, and so don't be afraid to ask for help, another person might be able to offer you a shortcut. Don't be daunted by the amount of stuff, and if you hate it, I guess you can always stop, but chances are, you're just going to really get into it and it's really rewarding in the end.

EW (38:34):

How do you decide what to work on next?

CS (38:37):

Oh gosh, I have to make these big lists. I'm definitely in the camp of having too many ideas. So I end up having to make these lists and then I have these huge arguments with Richard Whitney, where we're just playing design chicken about different projects that we have. And we just have these huge discussions about what is actually good to work on and what is interesting to work on, both from the perspective of why is it interesting for people, why is it a good or bad product, why does it potentially help people or not help people? A lot of times, I just end up gravitating towards the more emotive side of things. I think I just really like building interfaces that create a feeling. And frequently, joy is the feeling, it's not all the Starfish Cat discomfort experiment and the Fur Worm, I feel bad that those are the ones we talked about. But I like to make things that could make people happy or help them understand something better, so I have a preference towards those, as well.

EW (39:45):

Do you view what you do more in line with engineering and consumer products, or more in line with performance art?

CS (39:59):

I think it's a little bit of both. In my professional life, I definitely am doing more on the side of engineering and consumer products. I do more of the design and visual elements, and I'll actually build full rapid prototypes when that's called for, too. But, in some ways, I feel like a lot of content and a lot of product is weird performance art. You're trying to create a system that reaches as many people as possible. And then, of course, I do have this kind of weird art side where I'm doing things that feel more performative, and they're mostly separate. But I feel like, in different parts of my life, I do both.

EW (40:46):

The emotive technology part seems very performance art style, to me, because, I mean, that's one of the things about art, is it generates an emotion.

CS (40:56):

Absolutely. And it is when I make the exaggerated versions. The truth is that there's a ton of emotive tech that we encounter every day already, different interfaces that evoke emotions. And there's a lot of really subtle ways that you can insert those into products that people do frequently. And, again, also by accident. Every time an interface moves, we already start to zoomorphize it. It actually does overlap with some of my professional work, but I don't build exaggerated interfaces in my professional life. I just build correctly calibrated ones.

EW (41:37):

Where does HackPretty fit in?

CS (41:39):

So HackPretty was something that I started doing because I missed making content that was more in the education space and more meant to teach people about what they could do, in terms of design and how to do it, in terms of code and hacking. So that's just sort of a thing to do on the side. I like it a lot. Recently I feel like I haven't been as good at putting videos up, so it's just kind of become a repository for different thoughts I have, but yeah.

CW (42:16):

Let me ask you a question about when you're making things for yourself, not for your job. You mentioned you have a pile of Photons lying around, you do some 3D printing and servos. Do you feel like being constrained to what you've got available for tools is helpful, or if somebody threw you a million dollars and a completely stocked shop with assistants, would you have things where, "Oh, now I can build this"?

CS (42:44):

Yes.

CW (42:45):

Okay, okay.

CS (42:46):

I would. I do actually also think that design with constraints is helpful. I will frequently, I know you said not in your professional life, but I will frequently ask people for all their constraints, because I think it's more fun to design with constraints. And it's a cool challenge even in my regular hacking life. But there are certainly things where if I had an unlimited budget, I would be trying to build them. I'd be trying to get in contact with the best and most amazing people in various fields to try to ask a million questions then also get them to help build things.

EW (43:23):

One of the problems I've come up against with my robot is I need something that is applied robotics, kind of like with computer vision: you can go into the depth and the math, or you can just use OpenCV and follow along on the examples. But robots still seem to need a lot of math, with localization and kinematics and whatnot. How do you find what you need to know? I mean, examples online are one thing, but sometimes you have to go deeper. How do you find these things?

CS (44:01):

Definitely. I ask a lot of questions of people in my local in-person community. I mean, I consider myself very much an amateur in these fields, and I know some people who are far less amateur and some people who are highly professional, so I'll frequently ask them questions and I'll also ask them, "How can I do it easier? Is this the wrong way to think about it?" And I always ask, "What should I Google?" I try to shortcut things a lot. Not in my professional life, but in my hacking-things-together life, I try to shortcut frequently, because the affect that I'm looking for is often more powerful if it's done more simply. So that's not something that generally helps when people are trying to build a capital-R Robot that does particular tasks, but that's what I do. I say, "What is the easiest way that I can get to that? Or is there any way that I can just encourage the human machine to do that instead?"

CS (45:10):

One of the early iterations of the Starfish Cat, I think we had talked about, because this was one of my design chicken things, where we're like, "Oh, but what if it did this? Oh, what if it did this?" And I think one of the early iterations had locomotion in it, and then I was like, "Oh, I don't need to do locomotion. I just need to give it an emotive enough movement that people feel like they have to move it to different spots."

EW (45:35):

Well, yeah, and, I mean, if it looks like a cat, people want to pick it up. And if it's doing the little kitty paws, then they think it's safe to pick up.

CS (45:45):

Exactly.

CW (45:46):

And it if rolls over on its back, then the knives come out.

CS (45:54):

Yeah. Oh, sorry. My actual cat just jumped up and did a thing.

EW (46:02):

Do you have any projects that you're working on that you can tell us about?

CS (46:06):

Oh gosh. Let me think. I'm doing some various things with biomimicry. This was a little bit of an older project, but right after I did the talk with the Fur Worm, I wanted to make an updated Fur Worm that wasn't intended to be broken, but still exuded these different minimums. So I did a design for a robot and I printed it out and put it together where it has a very concrete skull that's actually based off the skulls of ferrets and those sort of shaped creatures, long, fuzzy, tube-shaped rats. So you end up with this critter that has what feels more correctly aligned when we look at it as a face and as a living thing, because the skeleton underneath it is actually based off of biological structures. So that was something I was working on. I'm also working on another simple emotive robot that doesn't locomote, but does similar emotive movement things with just its two little front paws, so stuff like that.

EW (47:19):

Yes. And if you had Christopher's million dollar shop, what would you be working on?

CW (47:22):

I don't have that. It's imaginary.

EW (47:26):

Imaginary, yes.

CS (47:28):

I would be working on augmentation interfaces as a large category. I think that a lot of what we do now and a lot of what people are excited about with technology in business is automation. You say, "How can we make it easier to do? How can we have something else do it for us?" The question that we're not asking that I wish we would ask more is, "How can we do augmentation? How can we create and use interfaces that make us smarter, that make us faster, that make us stronger, that make it easier for us to do the things that we want to do, that leave us in the driver's seat and give us control, and allow us to further human progress that way?"

EW (48:10):

Yes. There are many things on that list that I would want to play with.

CS (48:15):

Yeah, it's a big list.

EW (48:17):

Yeah. Christopher, why don't you have that laboratory ready for me?

CW (48:20):

You got to give me a million dollars.

EW (48:23):

Okay, I have one more thing to ask you about that's completely in a different direction, I think. You wrote the Cartoon Guide To The Internet of Things.

CS (48:33):

Yes.

EW (48:34):

What's up with that?

CS (48:36):

So when I was at Buzzfeed, I think my title was Internet of Things Fellow.

CW (48:41):

Sorry.

CS (48:42):

Yeah, I know. Because this was, I think, pretty much in the heyday of IoT hype, I found myself frequently explaining to people what IoT actually was, and why there was hype and why it was potentially important and what was potentially good or bad about it. So I decided that I should make a very simplified form, because you would frequently do a search for, "What is the Internet of Things?," and get a lot of company documents that were obligated to talk about it in a way that said, "It's going to be the hugest thing. We're going to make so much money. It's going to be so amazing. Here's the number of devices that are going to come online," and contained no actual information for your mom about what that was. So I wanted to create the document that you could hand to a parent or somebody who is highly not technological, because some people's parents are highly technological, and say, "Okay, this is what it is."

EW (49:43):

And did you draw everything? Is it computer drawn? Hand drawn?

CS (49:47):

Oh no, I drew it, yeah. I like to draw things, so I drew that. I really enjoyed making content at Buzzfeed; there are a couple of things that I did where I drew all of the graphics, just because I didn't want to bother the artist there to draw the graphics for me. Yeah, I'll frequently doodle for fun and that's one of the ways I do design, is to do these visual mockups.

EW (50:21):

I'm interested, because I also doodle for fun, and I have the Narwhal's Guide to Bayes' Rule, which I don't know if anybody's ever read, but I find it hilarious.

CS (50:33):

That's amazing.

EW (50:36):

And I never quite know what to do with these comics. I mean, you did it for Buzzfeed, so that counts. I put it on my website and then wait for people to comment and they never do.

CS (50:52):

Yeah, it's really hard to get content out.

EW (50:58):

I mean, content's kind of a bad word. I mean, I remember somebody, I was like, "Oh yeah, I have this blog. I have this podcast." And they're like, "Oh, you produce a lot of content." And I'm like, "That's not what I think of it as. I produce things. Oh my god, it is content." All right, well, do you have any questions for us, anything that you're working on that you wanted Embedded software engineers' advice on, or anything?

CS (51:29):

Oh gosh. I mean, I'm sure that what I do is way too simple to actually get advice on. I think one of the things I ask everybody is, is there anything you think I should read?

EW (51:44):

Yes, so many things. Christopher, do you want to answer first while I collect my thoughts?

CW (51:52):

What makes you think I'm going to go any faster? Things you should read? Wow.

EW (51:59):

I mean, there's The Way Things Work, and that's been updated.

CW (52:02):

How Things Work.

EW (52:03):

How Things Work.

CW (52:03):

Or is it The Way Things Work?

EW (52:04):

I think it's The Way Things Work. Because that's just so nice at describing little things, and it is amazing how quickly the little things build into the big things.

CW (52:15):

There's the book on making things move.

EW (52:23):

Yeah, I just put that in the don't read pile.

CW (52:25):

Don't listen to us.

EW (52:31):

So not that one. Let's see. For things like this-

CW (52:33):

What did you read when you went through your robot arm stuff?

EW (52:41):

I read a lot about the Robot Operating System, which is very complicated, and probably not that relevant, but the idea that there are thousands of people working on small modules for robots to help them do all the things a robot needs to do, is pretty powerful. But it requires pretty big processing, because it's all distributed and not efficient for small things. I mean, science fiction is always where I go, because it's got all the good ideas and it shows the pathway for some of these emotive technologies leading to bad... I mean, we've read the story, right?

CS (53:22):

Yep.

EW (53:24):

Seen the movie, got the t-shirt. Yeah, it's all there. Okay, so the book that I would suggest most, aside from the technology and science fiction and all of the datasheets and user manuals, blah, blah, blah, is Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness by Peter Godfrey-Smith. So the idea, I mean, I read a lot about octopuses, I don't know why, I just really like the idea of them. But in this book, he talks about the history, the natural history parts, the science, but also the idea that their brains are huge. And yet, they're completely alien from us. And this idea of the alienness was striking to me, because if we ever do meet aliens, we might need to take on this idea that they're not like us, and that's fine and that's good, but you can't assume that something's stupid just because it's different. And the way octopus intelligences work is just miraculous, but I think that's a whole show or at least a blog post.

CS (54:38):

Yeah, that sounds awesome. I'm definitely going to read that.

EW (54:43):

Yeah. I mean, I love books, so we could be here all day with the books, but maybe we should actually go about our weekend, which would be nice. Because if the weather's as good there as it is here, it's going to be an outside weekend.

CS (54:56):

Yeah, it's looking good today.

EW (54:59):

Chris, do you have any other questions?

CW (55:00):

Well, you just said we want to go about our weekend.

EW (55:02):

All right, all right. [crosstalk 00:55:04]. Christine, do you have any thoughts you'd like to leave us with?

CS (55:07):

Oh gosh. I guess one of the last things that we talked about, which is: when you're building something, try to focus on augmentation over automation. It will be better for you and also the rest of the humans.

EW (55:22):

Excellent. That's always appreciated by the rest of the humans. Our guest has been Christine Sunu, creative director at flashBANG Product Development and creator of HackPretty. Thank you for being with us Christine.

CS (55:38):

Thank you.

EW (55:39):

We'll have show links to many of the things we talked about, including Christine's blog and her company. I would like to thank Christopher for producing and co-hosting. And, of course, thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm.

EW (55:57):

A thought to leave you with, as long as I'm on the idea of octopuses and cephalopods. I've been reading a lot about cephalopods and their big brains. Take cuttlefish: they have skin that changes color, but they don't seem to be able to see color with their pretty advanced eyes. Instead, it's like their chromatophores, the color-changing mechanisms on their skin, are wired into their brains without any conscious control. So when you look at a cuttlefish's skin, you may be seeing their brain waves. You can see their thoughts.

EW (56:34):

Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.