253: We'll Pay Them in Fun
Transcript from 253: We'll Pay Them in Fun with Kathleen Tuite, Elecia White, and Christopher White.
EW (00:00:07):
Hello, this is Embedded. I am Elecia White, here with Christopher White. And this week here also with Kathleen Tuite. We're going to dive into computer vision, augmented reality, games and meetups.
CW (00:00:22):
Hi Kathleen. Thanks for joining us.
KT (00:00:23):
Hello. I'm happy to be here.
EW (00:00:26):
Could you tell us about yourself as though we met at a technical conference?
KT (00:00:30):
Okay. My name is Kathleen Tuite, as you just said. And I am currently a software engineer at a computer vision AI company called GrokStyle. My background, from this current job and past projects, involves computer vision, game design, crowdsourcing, human-computer interaction, all those things kind of wrapped up together. And what I really like doing in general is taking interesting computer vision systems and building other interactive things around them that people can actually use and play with.
EW (00:01:05):
Yes, we have so much to talk about.
KT (00:01:06):
Yes.
EW (00:01:08):
First we have lightning round. You've heard the show, so you know -
KT (00:01:11):
[Affirmative].
EW (00:01:11):
- that the goal is fast and snappy.
KT (00:01:15):
Okay.
EW (00:01:16):
Do you want to start? Okay. Christopher is shaking his head, which goes over well in podcast land. Minecraft or Pokemon Go?
KT (00:01:26):
I like both of them a lot.
CW (00:01:32):
Favorite OpenCV function.
KT (00:01:34):
I don't like OpenCV as much.
CW (00:01:37):
Least favorite OpenCV function.
KT (00:01:38):
One thing I do like about OpenCV, I just like reading in an image and then also displaying it. So the ones that read images, the ones that show stuff, those are probably my two favorites, just so I know what's going on and that I can, you know, get started and build something on top of that.
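A minimal sketch of that read-it-in, show-it-on-screen starting point, using the standard OpenCV calls (the file name is just a placeholder):

```python
# Read an image from disk and pop up a window showing it.
import cv2

img = cv2.imread("photo.jpg")      # returns a NumPy array in BGR order, or None on failure
if img is None:
    raise FileNotFoundError("could not read photo.jpg")

cv2.imshow("photo", img)           # display the image in a named window
cv2.waitKey(0)                     # block until a key is pressed
cv2.destroyAllWindows()
```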
EW (00:01:59):
Favorite computer vision library then?
KT (00:02:03):
So when I was a grad student, I did a lot of stuff with this tool, this system called Bundler, which is a Structure from Motion pipeline. So I'm pretty fond, I have a love-hate relationship with Bundler. And now there's a Python version of it called OpenSfM that is run by this company called Mapillary.
EW (00:02:26):
OpenSfM. Okay.
KT (00:02:27):
[Affirmative]
EW (00:02:29):
Is it my turn or yours?
CW (00:02:30):
It's my turn.
EW (00:02:31):
Oh, go go go.
CW (00:02:31):
Favorite VR game.
KT (00:02:34):
I went to like a capstone demo, the Sammy Showcase at UC Santa Cruz, with a bunch of games that students have been working on this past year. And I played this ghost cat VR game where you're a little ghost cat and you have another buddy ghost cat, and you have to stay near each other 'cause they're your source of illumination. And then you have to jump around and get to other places. And so the ghost cat VR game.
EW (00:03:05):
That no one else can get to.
KT (00:03:06):
That no one else - I mean, they're trying to make a research platform out of it. So maybe you can play it soon. I dunno. That's the only recent thing that I can think of.
EW (00:03:17):
Tip you think everyone should know?
KT (00:03:20):
Stop writing bugs.
EW (00:03:26):
Just stop it?
KT (00:03:28):
Okay. So a long time ago I was programming stuff with my husband, my boyfriend at the time. And I was just kind of being sloppy with what I was doing, and I'd write some code and then I'd run it and then I'd read the error and be like, "Oh, I spelled that thing wrong." And it was just this really slow process. And he was like, "Just stop writing these bugs." And I was like, "Okay, I'm going to try. I'm going to be more mindful about this and just go a little bit slower and think, 'I'm a human, I can do this as best I can,'" and try to just not write the bugs in the first place. And then whatever errors do come up, you know, they're still there for me to figure out, but the really basic ones I can kind of just try not to do that.
EW (00:04:18):
Yes. I know what you mean. Monkey coding is what it is for me when I do it, where I'm just like, "Oh, I'm just going to keep typing at this until it works."
KT (00:04:28):
Yeah.
EW (00:04:28):
And I'm not going to sit there and think about how it should work and how I can get from where it doesn't work to where it does work. I'm just going to keep incrementing this variable until the timeout is the right length.
KT (00:04:40):
Yeah. Yeah. Just stumbling through it and that works sometimes, but -
EW (00:04:46):
Thinking about it actually is kind of better, actually better.
KT (00:04:52):
Yeah.
EW (00:04:52):
In so many ways.
KT (00:04:53):
Yeah.
EW (00:04:54):
Okay. So stop writing bugs. That's great advice. Computer vision.
KT (00:05:01):
Yes.
EW (00:05:02):
When people say computer vision, what do they mean?
KT (00:05:06):
So I would say computer vision is the ability for a computer that's gotten some sense or some picture of the real world. Maybe it's like a picture from a normal RGB camera. Maybe it's a depth sensor, some more enhanced picture. The computer's ability to make sense of that and understand what is going on in that scene, whether it's recognizing the objects in the scene or the activity that's happening.
KT (00:05:34):
Or just more information about what the scene really represents, a facial expression that someone has or what the identity of a person is. Those are all computer vision things, the ability to understand things about the real world from a picture or a movie or something kind of like a picture, like a depth image.
EW (00:05:57):
I like the way you put it, because it is about the computer not just acquiring the data, but being able to do something, you said, understand, which computers don't usually do, but it's that level. It's an intelligence.
KT (00:06:13):
Yeah. Yeah. To make sense of it enough at some, whatever understanding level is possible, that then you can actually use that in some other system.
EW (00:06:24):
You did graduate research. And I know I'm not going to get the word right. Photogammetry?
KT (00:06:31):
Photogrammetry.
EW (00:06:32):
Photogrammetry. I had missed the R. Okay. What is that?
KT (00:06:38):
So photogrammetry is the ability to get a bunch of images of one thing all from different angles and kind of come up with the 3D structure of the item that all the cameras are looking at. And also, the pose of each of the cameras, how they relate to one another and to the object itself.
EW (00:06:59):
And so if I go and I take a picture of the Eiffel Tower, and then take another picture of the Eiffel Tower, you can build the Eiffel Tower in 3D from my photos?
KT (00:07:12):
Yeah, yeah.
EW (00:07:14):
But two isn't enough.
KT (00:07:15):
No two will... if they're close enough that they're seeing roughly the same view, but they're also spread out enough that they're not exactly on top of each other and you can get some kind of 3D information from them being split apart. You could know where the pictures are, where the Eiffel Tower is, but if you want to go further and get a fuller 3D model of the Eiffel Tower, you'd want many pictures of it from many different angles. And that might be enough to fill in the actual structure of that object.
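To make the two-picture case concrete, here is a toy triangulation sketch. It assumes the camera intrinsics and both camera poses are already known, which is exactly the part Structure from Motion has to recover when they are not, and all of the numbers are invented:

```python
# Recover a 3D point from its pixel coordinates in two views with known poses.
import numpy as np
import cv2

K = np.array([[1000.0, 0.0, 640.0],     # made-up pinhole intrinsics:
              [0.0, 1000.0, 360.0],     # focal length 1000 px, principal point (640, 360)
              [0.0, 0.0, 1.0]])

# Projection matrices P = K [R | t]; the second camera is shifted half a unit sideways.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# The same physical point, as seen (in pixels) by each camera.
pt1 = np.array([[700.0], [400.0]])
pt2 = np.array([[650.0], [400.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated 3D point:", X)
```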
KT (00:07:50):
Although the Eiffel Tower has a lot of cross braces and things where you can see through it. And...that will probably be a little bit challenging for the computer to make sense of something -
EW (00:08:05):
How about the Washington monument?
KT (00:08:08):
The Washington monument -
EW (00:08:10):
That has not enough detail, right?
KT (00:08:11):
Yeah. And it kind of looks the same from all four sides. The tall one. The canonical example of this, where Structure from Motion photogrammetry sort of became a thing that other people started really running with, was this Photo Tourism project at UDub, where they took a bunch of photos from Flickr of popular tourist places like Notre-Dame Cathedral and the Trevi Fountain in Rome and used those photos. So those are places where there's enough texture and structure there, but it is kind of this continuous surface that you can't see through, unlike the Eiffel Tower.
EW (00:08:54):
So it's better if you have something that has a lot of detail, but not see-through and not repeating detail.
KT (00:09:03):
Right.
EW (00:09:05):
This is a lot of caveats.
KT (00:09:06):
It definitely is. Yeah.
EW (00:09:08):
Okay. And then, so the way I think it happens, my mental image is you have picture one and then you take picture two and you try to map up all of the same points. And then you take picture three and you map up all the same points on picture one and picture two, but then I'm kind of lost. I mean, I know you can use convolution to map up points that are the same, but how do you, what happens after that? Is that even right?
KT (00:09:40):
That is totally the first step of getting two images or more images or pairs of images in a whole big collection of images and figuring out what all the interesting points in these images are, and then matching them with each other. And then you have a bunch of, "I've seen this particular point in these two images or these 10 images. And it was in these pixel coordinates of these images over here."
KT (00:10:08):
And you just have a whole bunch of data of those correspondences, and then you throw it into something called bundle adjustment. And that will figure out the 3D positioning of where all those points should be in 3D space and where the cameras should be, what pose they should have based on all these, camera pinhole, math equations there.
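As a rough illustration of that bookkeeping, the sketch below merges hand-made pairwise matches into "tracks", the per-point observation lists that a bundle adjuster would consume:

```python
# Merge pairwise feature matches into tracks: lists of observations of the same
# physical point across several images. The matches here are made up.
from collections import defaultdict

# Each entry: (image_id, feature_id) matched to (image_id, feature_id).
pairwise_matches = [
    ((0, 17), (1, 42)),   # image 0, feature 17  <->  image 1, feature 42
    ((1, 42), (2, 9)),    # image 1, feature 42  <->  image 2, feature 9
    ((0, 30), (2, 55)),   # a second, unrelated point
]

parent = {}

def find(x):
    # union-find with path compression
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])
    return parent[x]

def union(a, b):
    parent[find(a)] = find(b)

for a, b in pairwise_matches:
    union(a, b)

tracks = defaultdict(list)
for feature in parent:
    tracks[find(feature)].append(feature)

for members in tracks.values():
    print("track:", sorted(members))
# One track comes out as [(0, 17), (1, 42), (2, 9)]: the same point seen in three images.
```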
EW (00:10:34):
Okay. We're going to ask you about that too.
KT (00:10:35):
Okay.
EW (00:10:35):
So don't get comfortable with me skipping that, but even that first step, are you using the RGB images or are you trying to find vertices?...What kind of algorithms do you use to even find the points and then what does this bundle thing do?
KT (00:10:55):
So the algorithm to find the points initially, SIFT is a good one. And I know, I think your typing robot uses these same SIFT feature points to figure some stuff out.
EW (00:11:10):
It does.
KT (00:11:11):
Yeah?
EW (00:11:11):
It does, but when I did it, I just used OpenCV and it magically worked. I have no idea what the algorithm was. That was part of when I was trying to figure out where the keys were and I had a perfect image of a keyboard. And then I had my current camera image of the keyboard and it was SIFT and FLANN and homography, and I just typed them in and wow. It just found them. And I did nothing. Even when I changed the lighting, it was pretty good. So -
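Something like the following is probably what was happening behind those OpenCV calls; this is a generic sketch with placeholder file names, not Elecia's actual code, and it uses the current cv2.SIFT_create API (older OpenCV builds put SIFT in the xfeatures2d contrib module):

```python
# Detect SIFT features in a reference image and a live image, then match them
# with FLANN (fast approximate nearest-neighbor search over the descriptors).
import cv2

ref = cv2.imread("keyboard_reference.png", cv2.IMREAD_GRAYSCALE)
live = cv2.imread("keyboard_today.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp_ref, des_ref = sift.detectAndCompute(ref, None)
kp_live, des_live = sift.detectAndCompute(live, None)

flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5},   # 1 = KD-tree index
                              {"checks": 50})
matches = flann.knnMatch(des_ref, des_live, k=2)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} good matches between the reference and the live image")
```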
KT (00:11:47):
[Affirmative]. Okay.
EW (00:11:47):
What does it do?
KT (00:11:48):
So to break it down a bit more, SIFT stands for Scale-Invariant Feature Transform -
EW (00:11:59):
"T". Transform. Yeah. Transform sounds good.
KT (00:12:01):
And basically for a computer to start understanding an image, one of the things that it can understand is little corners of things in images, where, say you have a picture of a building and there's a window sill. And the part where the outside of the window sill comes together at an angle, and it's on top of this, there's a brick facade or something so that the sill and the brick are different colors. And the light is casting shadows in a certain way. That particular corner of that building might have, or will have a distinctive look.
KT (00:12:46):
And the SIFT feature of that particular point would capture something about the colors there, but more importantly, the edges and what angle they're at and how strong, how edgy they are, how corner-y they are. And the scale-invariant part of SIFT means that if you have a picture of that window sill up close, and you have another one that's maybe far away, and maybe it's rotated a little bit, that particular piece of those two images will still look very similar. It will have a descriptive way that the computer can represent it, that it can tell that they're the same point.
EW (00:13:28):
Okay. Okay. So now we found all of these correspondence points, correspondence?
KT (00:13:34):
Yeah. I mean, they start out as just, these are feature points. These are points of interest. These are little corners or things that a computer can say, like, "I know what that is, I know where it is." Versus on a plain blank wall. There's nothing special about a pixel in the middle of that space, it could be anywhere. And then when you have multiple images, like two images that both have SIFT points, and you kind of figure out the correspondence between them, that's when the correspondence part comes in.
EW (00:14:03):
And so I can sort of understand with two images, 'cause that's kind of how my eyes work, it's 3D vision.
KT (00:14:11):
Right.
EW (00:14:12):
And if my eyes were further apart, you know, if I had a really big head, I would be able to see 3D vision further away. But right now, after about 10 feet, everything's kind of flat.
KT (00:14:25):
[Affirmative].
EW (00:14:25):
I know there's actual math that would tell me how far it is, but realistically I'm pretty cross-eyed. So 10 feet is really about it for me, don't play basketball with me. And so when you have two photos taken far apart, then you can get more depth.
KT (00:14:45):
Yes.
EW (00:14:47):
But my eyes work because they're always in the same place, they always have the same distance between them. It seems like a chicken and egg problem that you can find these points and you can find the 3D-ness of it, but you also find where they are...which one's the chicken and which one's the egg and which one comes first?
KT (00:15:07):
Hmm. So you're totally right that our eyes, our brains have calibrated the fact that these eyes that we have are always in the same relative position to one another. And I think 3D reconstruction techniques from two images have existed for a while. And they've started out with, "We need to calibrate these two images relative to each other first, they're going to be mounted on some piece of hardware and they're never going to change. And if some intern bonks them, then they have to go recalibrate the whole thing". And -
EW (00:15:42):
Yeah, I remember doing, yeah. I think maybe I was that intern.
KT (00:15:46):
And they have these calibration checkerboards that you can set up -
EW (00:15:51):
Yeah.
KT (00:15:51):
And there's probably some OpenCV function for, "look at this checkerboard and figure it out," figure out what the camera is.
EW (00:15:58):
There totally is.
KT (00:15:59):
Yeah. So getting from two cameras where you've calibrated them already, and also you have to calibrate the internal lens distortion and all of that, of a camera, and that's where the checkerboards come in. But having more cameras, yeah, you need to figure out "What is the 3D structure of the points I'm looking at that will help me figure out where the cameras are?" Then also you need to figure out where the cameras are to figure out where the 3D points that you're looking at are.
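There is indeed such a function; a rough sketch of the usual checkerboard calibration flow in OpenCV, with an assumed 9x6 board and placeholder file names:

```python
# Find checkerboard corners in several photos and solve for the camera
# intrinsics and lens distortion.
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row and column on the printed board (assumed)

# The board's corner positions in its own coordinate system, measured in squares.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```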
KT (00:16:30):
And what this bundle adjustment technique is...you had mentioned homography or alluded to it. Homography is an initialization step of, there's two cameras, and they're looking at the same thing. And if that thing is a planar surface, it's kind of understanding the relationship between those two cameras.
EW (00:16:56):
Yes, in my typing robot I have the keyboard,...the perfect keyboard, and then I have my scene of whatever. However I've put the camera up today. And then, the homography, I take a picture and it maps the escape key onto the escape key and the space key onto the space key. And then it gives me the matrix that I can use to transform from my perfect keyboard world to my current image world. And so that matrix, I can then just use to transfer coordinates between them.
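A sketch of that homography step with invented point coordinates; with real SIFT matches you would pass many noisy correspondences and let RANSAC reject the bad ones:

```python
# Estimate the 3x3 homography that maps reference-keyboard coordinates into
# today's camera image, then use it to transform a key location.
import cv2
import numpy as np

# Four corresponding points: (x, y) in the reference image and in today's photo.
ref_pts = np.float32([[10, 10], [300, 12], [295, 110], [12, 108]]).reshape(-1, 1, 2)
img_pts = np.float32([[52, 40], [340, 60], [330, 160], [55, 150]]).reshape(-1, 1, 2)

H, _ = cv2.findHomography(ref_pts, img_pts)   # with many matches: cv2.RANSAC, 5.0

# Where does the escape key (known in reference coordinates) land in the photo?
escape_ref = np.float32([[[15, 15]]])
escape_img = cv2.perspectiveTransform(escape_ref, H)
print("escape key in the current image:", escape_img.ravel())
```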
KT (00:17:34):
[Affirmative]. Right. So if you have all these pictures that tourists took of the Eiffel Tower, you can look at the pairs of cameras and look at the SIFT correspondence points that you found between them and kind of estimate a homography by, "What is that matrix that says how this one camera moves to become the other camera?"
KT (00:17:57):
And it might not be perfect 'cause the points that you're looking at in the world, there's maybe stuff that you don't have enough information yet. You don't know what the internal camera parameters are for that particular camera, but you can get some initial guess. And then, what bundle adjustment does, is takes all of these initial guesses of how all these cameras and points and tracks of points seen by multiple cameras fit together. And it comes up with an optimization that solves for both of those things at the same time.
EW (00:18:32):
So it takes all of the correspondence points for each pair, and then it minimizes the error for all of them?
KT (00:18:40):
Yeah.
EW (00:18:41):
And so if you end up with a bogus pair, like on my keyboard, if I was mapping A to Q,...if I took a bunch of pictures, it would eventually toss that one because nobody else agreed with it.
KT (00:18:56):
Yeah. It might toss it or it might be like, "This is, I think this is right." And it might just be wrong.
EW (00:19:03):
And then it skews everything.
KT (00:19:03):
Yeah. So in this project that I worked on in grad school called PhotoCity, which was a game for having people take lots of photos of buildings and make 3D models, I saw a lot of this 3D reconstruction stuff gone wrong. Where a person would take photos and the building, the wall of the building would grow, but then it would just curve off into the ground or...the model would just totally flip out and fall apart.
KT (00:19:35):
Because...this bundle adjustment, this effort to kind of figure out cleanly where everything goes, would just get really confused. Or sometimes there'd be like itsy-bitsy, teeny-tiny, upside down versions of a model that were really close. 'Cause the computer was like "This makes sense to make a tiny version of this building here. It kind of looks the same as having one that's really far away."
EW (00:20:02):
Yeah. And when you get a discoloration in a building that has bricks, and then you end up with the small discoloration of the bricks and it can't tell the difference because of the scale invariance.
KT (00:20:12):
Yeah. So -
EW (00:20:17):
And so -
KT (00:20:18):
Computers, man, they mess up sometimes.
EW (00:20:20):
When you do the minimization problem of finding all the matrices, which gives you the 3D aspect, that's when you can start figuring out where the people are.
KT (00:20:30):
Right.
EW (00:20:30):
Because you can backtrack, once you're confident that these points are in this space, you can backtrack to where the camera person must have been.
KT (00:20:39):
It's doing both at the same time. It's kind of going back and forth between optimizing where the points are and optimizing where the cameras or the people holding the cameras must have been. And you can say, "I have a pretty good guess of where the 3D points out in the world are. But if I wiggle the cameras around a little bit, then we'll come up with a better configuration that minimizes that error even more." And the error that we're trying to minimize is, "Do these points in the world, do they project back onto the right pixel coordinate of the image? Or are they off?" We're trying to sort of get everything to make sense across all these different pictures.
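For flavor, here is a toy bundle adjustment on synthetic data, nothing like the sparse, carefully engineered solvers inside Bundler or Ceres, but it shows the idea of jointly wiggling camera poses and 3D points to shrink the reprojection error; the focal length and all of the geometry are made up:

```python
# Jointly optimize camera poses and 3D points so that each point projects back
# onto the pixel where it was observed (the reprojection error).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

f = 800.0  # assumed focal length in pixels (simple pinhole, principal point at 0,0)

def project(points, rvec, tvec):
    # world points -> camera frame -> pinhole projection to pixel coordinates
    cam = Rotation.from_rotvec(rvec).apply(points) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(params, n_cams, n_pts, observations):
    cams = params[:n_cams * 6].reshape(n_cams, 6)     # [rvec | tvec] per camera
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for cam_i, pt_i, uv in observations:
        pred = project(pts[pt_i:pt_i + 1], cams[cam_i, :3], cams[cam_i, 3:])[0]
        res.append(pred - uv)
    return np.concatenate(res)

# Two cameras looking at four points; observations are slightly noisy projections.
true_pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.5], [0.0, 1.0, 6.0], [1.0, 1.0, 5.2]])
cam0 = np.zeros(6)                                  # at the origin, looking down +Z
cam1 = np.array([0.0, 0.1, 0.0, -1.0, 0.0, 0.0])    # slightly rotated, shifted sideways

rng = np.random.default_rng(0)
observations = []
for cam_i, cam in enumerate([cam0, cam1]):
    uv = project(true_pts, cam[:3], cam[3:])
    for pt_i in range(len(true_pts)):
        observations.append((cam_i, pt_i, uv[pt_i] + rng.normal(scale=0.5, size=2)))

# Start from a perturbed guess of the point positions and let the solver sort it out.
x0 = np.concatenate([cam0, cam1, (true_pts + 0.3).ravel()])
result = least_squares(residuals, x0, args=(2, 4, observations))
print("mean reprojection error (pixels):", np.abs(result.fun).mean())
```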
EW (00:21:24):
And in the end, this is a massive linear algebra problem.
KT (00:21:28):
Yeah. Pretty much.
EW (00:21:30):
That's weird. I mean, it sounds like...you put photos in, you get locations and 3D out. And so it sounds so smart, but in the end, it's just like massive amounts of a + bx + cy.
KT (00:21:50):
Yeah, yeah. It's totally magic that this is possible, but it's also totally not magic. It's just a bunch of math.
EW (00:22:00):
It used to be whenever we were doing computer vision stuff or machine vision or whatever we were calling it, there was the requirement that things be lit very brightly.
KT (00:22:11):
[Affirmative].
EW (00:22:11):
That went away. Why did that go away? How did that go away? That was the core thing with object identification and location. When did lighting stop mattering? Or does it still, and I'm just using better tools?
KT (00:22:27):
There can be a number of things involved. Lighting still matters, but SIFT is pretty good at matching things, even when the lighting is a bit different. Another big thing might be that the quality of cameras that we have is better now...the webcam that you have, or the camera on your phone, or the camera that's built into your laptop...they can work better in lower lighting, crappy lighting. They will also just take clearer pictures.
KT (00:23:03):
So I imagine that it was more critical in the past because...the cameras just couldn't see very well. And so you really had to make it easy for the cameras. And then a third aspect is that we have a bunch of data online, taken by cameras. And so there's a lot more that we can do with, say, crappy cameras, not-very-good cameras. And we can learn more from all of this data that's available. So we can kind of compensate for the fact that the lighting might not be as good. 'Cause we've seen enough examples of something with not very good lighting, that we can still understand what it's supposed to be.
EW (00:23:51):
It's interesting that it's the camera technology that is one of the drivers I hadn't - yeah. Makes sense.
CW (00:23:58):
Well, that's probably the application too. 'Cause if you're doing like a manufacturing thing, you want everything to be exactly the same all the time. So, "Okay, we have good lighting and we know the lux and everything, every time, just know the circumstances don't change." Whereas for the more general vision application, you might be taking pictures anywhere. And so you have to be able to adapt.
KT (00:24:19):
Yeah.
CW (00:24:19):
If you don't have to be able to adapt, then it's easier. Right?
KT (00:24:21):
Yeah, yeah. Because...it's working well enough as technology, taking a picture and adding to a model or taking a picture and recognizing some object in it. And those are getting into the hands of consumers. You're totally right that now people want to use that in a wider variety of applications. So it's kind of pushing the limits of "We need to work on making this better. We need to work on making it still figure out what it's doing, even if it's some random person taking a picture in their dark living room."
EW (00:24:52):
And I think that has gone back to the manufacturing areas, that even there, you don't need the bright lights because we've learned to adjust to people taking crummy pictures.
CW (00:25:03):
It's cheaper not to have to do that. You can use consumer level stuff. Yeah.
EW (00:25:07):
Yeah. Okay. So at the end of taking a bunch of pictures, you get a bunch of points on your Notre-Dame -
KT (00:25:19):
[Affirmative].
EW (00:25:19):
- or your Eiffel Tower, although we agreed that was kind of iffy. And then you get the location of the people.
KT (00:25:26):
[Affirmative].
EW (00:25:28):
Which one is more important, and what do you do with it then? I mean, part of me is like, "Oh, this is a surveillance thing. I should never take another photo in my life."
KT (00:25:42):
The locations of the cameras, probably. There's more information there. 'Cause you can understand where the people were, who was taking these pictures, where they were standing, where people can go...the points themselves, there might not be enough of them to really do something. The points on Notre-Dame or the points on the Eiffel Tower.
KT (00:26:10):
It's kind of like, okay, now we have a crummy point cloud of this place. And we could just get our 3D model of that object another way. But then to know where all the humans were standing... there's a project that was a follow-up to this photo tourism project of looking where people walk in the world when they're taking pictures of things. And they made a little map of people walking into the Pantheon and where most people took photos.
KT (00:26:41):
And you could see that you'd walk in and you kind of go to the right. And lots of people would take photos right when they got in of the ceiling and other stuff. And then they'd walk around and the amount of photos that they took kind of trailed off collectively, 'cause people just got it out of the way at the beginning.
KT (00:26:54):
And I went to the Pantheon in Rome and I was like, "I've never been in this building, but I know what to expect, where people are going to flow in this space, and where everyone's going to be taking pictures," and sure enough, you go inside and you're routed around to the right in a counterclockwise direction. And all these tourists are pointing at the ceiling in the beginning. And not so much at the end.
EW (00:27:19):
Museums could use this to figure out which artworks are getting the most attention.
KT (00:27:25):
Yeah.
EW (00:27:25):
I mean, I guess just the number of pictures taken of each artwork. But where people stand, there are a lot of times where, how the crowd moves is an interesting problem.
KT (00:27:36):
[Affirmative].
EW (00:27:36):
But that was not what I asked you here for. Now I totally want to talk about that. Building the 3D models. That was what you were doing. You were taking the point clouds and making 3D models, right?
KT (00:27:55):
Yeah. I mean, I was building this game, this crowdsourcing platform around this Structure from Motion system where people could be empowered to go take pictures wherever and make 3D models of wherever. So in some sense, it was about getting the 3D models, but it was also about just, how do we get an entirely new kind of data that doesn't exist online already.
EW (00:28:24):
But, that data does exist online.
KT (00:28:28):
Not really. We have a bunch of pictures of the front of all these fancy tourist buildings, but we don't have enough around the side...people aren't going to be walking down some alley, taking a bunch of pictures on their vacation, unless they're playing PhotoCity... or they're doing some other crowdsourced street view thing, like Mapillary, which I mentioned before. But the data, it's not there. There are gaps in what people have taken just of their own accord and posted online.
EW (00:29:03):
This is something that I have heard you speak on some, that the data we have for so many things is, I mean, biased even visually, but biased in all kinds of ways with gaps.
KT (00:29:18):
[Affirmative].
EW (00:29:18):
And you want to gamify filling in the gaps.
KT (00:29:24):
Yes.
EW (00:29:27):
That's cool? Weird. Strange. Cool. How have you convinced humans that they should help their robot overlords?
KT (00:29:43):
By helping your robot overlords get more data and understand just the world around them better. There can be better applications built for humans to use in our daily lives.
EW (00:30:03):
Give me examples of gamification of this sort of thing.
KT (00:30:08):
There's two tangents here. One part is about gamification. And one part is about how...applications built on data, AI applications, there's data out there, and then people will try to use it and it works for some things, but it doesn't work for other things. And...there needs to be more data that directly relates to what a person is trying to do. And because there's some system of some human trying to do something and, an AI system isn't working for them, or it works sometimes, maybe that can turn into a fun game.
KT (00:30:45):
Like "What is a computer good at knowing, what is it not good at knowing, how can I stump the computer?" So an example of things that, they may not be called games but they're kind of game-like, a couple of years ago, there was this how-old robot, age-guessing thing that Microsoft put out where you uploaded a picture of your face and it found the face in the image and then it estimated an age for that.
KT (00:31:15):
And...the internet loved it. Because it would either have some really accurate response or it would have some really hilariously wrong response. Like, "Oh, this picture of Gandalf says he's like 99 years old, hahaha. Or this picture of me says I'm way younger than I actually am, how flattering," or kind of funny things like that. People found ways to play with it and figure out all its limitations and what its capabilities were. And they kind of had this communication around it.
EW (00:32:01):
Yeah, last week we talked to Katie Malone about AI and one of the things we talked about was fooling the AI and the Labrador puppies and the chicken images -
KT (00:32:15):
[Affirmative]. The fried chicken.
EW (00:32:15):
- where the AI is confused as to which things are dogs.
KT (00:32:19):
Yeah.
EW (00:32:19):
And there's a whole set of dogs or not. Yeah.
KT (00:32:21):
Yeah. The Chihuahuas that look like blueberry muffins.
EW (00:32:25):
I loved those. Although when I told a Chihuahua owner that their dog was a cute blueberry muffin, they totally didn't get it.
KT (00:32:35):
Aw, man.
EW (00:32:35):
Yeah. Okay. So there's the fun aspect of making fun of the computer?
KT (00:32:42):
And also trying to help it along and like, "Oh...I want to help teach you to do better." And if we can kind of elevate what computers are capable of, there might be areas where then we are suddenly more powerful, more capable because now we have these better trained tools at our disposal.
EW (00:33:07):
Okay. So there's the aspect of wanting to train slash one-up the AIs and then there's straight up gamification.
KT (00:33:17):
Yeah.
EW (00:33:18):
That's where you compete with other people to provide the AI with more information.
KT (00:33:24):
Yes. So there's a history of gamification, especially regarding data collection. There's a series of games or there's like a genre, called GWAPs or games with a purpose.
EW (00:33:44):
GWAPs. Really?
KT (00:33:45):
Yeah.
EW (00:33:45):
That's how we're going to pronounce that?
KT (00:33:46):
Yeah. Yeah.
EW (00:33:47):
I thought it was G-WAPs.
KT (00:33:51):
I have heard G-WAPs.
EW (00:33:51):
Okay. Games with a purpose.
KT (00:33:54):
Yeah.
EW (00:33:54):
Okay.
KT (00:33:55):
And I actually...I've built games with a purpose, but I also am highly critical of games with a purpose and gamification and when it's done shallowly and when it's, "Oh, we'll just sprinkle points and leaderboards and badges on top of something to try to get people to do this task for us for free, we'll pay them in fun." And sometimes it's not fun. The game wasn't designed very well, it doesn't make sense to be a game. There's many cases where maybe you should just build some tasks on Mechanical Turk and pay people fairly to do that task instead of trying to go in this roundabout game way.
EW (00:34:48):
Okay. So, you're ambivalent about gamification and I totally understand that. What would make it be done well? I mean, what are the hallmarks of actual fun?
KT (00:35:02):
So, okay. There's a book by Raph Koster called A Theory of Fun. And one of the ideas of that book is that learning is what makes games fun. There's some pictures in the book, it has lots of pictures. It's got kittens rolling around and it says "the young of all species play" and kids and kittens and puppies are playing but they're learning a ton as they're playing.
KT (00:35:31):
And...I think a thing that...almost basically every game has, is you're learning the mechanics of that game. You're learning the rules, you're learning the system and you start out not knowing that game, but that game will help you gain the skills that you need to do more interesting things in that game. And this also fits into this theory of flow by this guy with a name that I can't pronounce. It's like "Chiks"...it has a lot of "C"s and "Z"s and "H"s and stuff in it. And I can look it up later. But this idea that -
EW (00:36:12):
Wow, that is a lot of, "Mihayli"?
CW (00:36:16):
Csikszentmihalyi?
EW (00:36:17):
Yeah. Flow: The Psychology of Optimal Experience. Okay. Sorry, go ahead. [inaudible]
KT (00:36:24):
Okay. Yeah. I'm glad you all tried to pronounce that.
CW (00:36:28):
I didn't do a very good job.
KT (00:36:32):
So in a lot of more basic gamification, there might not be anything interesting that the person is learning or there's not any skill that they're trying to practice or get better at. And I think that's when I get kind of suspicious and judgmental and...how is this fun if the person isn't learning something here. Maybe they're learning to game your gamification system instead of actually doing the task that you want them to do.
KT (00:37:07):
So having skill, having something that a person is learning over time that they're getting better at, that they're interested in getting better at, and also -
EW (00:37:18):
You're making me judge the games I play so hard right now.
CW (00:37:22):
Well, games for game's sake are a different category, right?
EW (00:37:28):
Well, I have been playing a game on my fitness thing that now I'm judging very badly.
CW (00:37:39):
Oh.
EW (00:37:39):
I like the idea of learning in games. It makes sense to me. I mean, when you think about Minecraft, that was all about learning.
KT (00:37:49):
Yeah.
EW (00:37:49):
It was all about learning the world and learning how the rules worked and even then learning more about how to make things in it that you wanted elsewhere.
KT (00:38:02):
Yeah.
EW (00:38:02):
And as I think about some of the other, even silly games, I play like Threes, which I think is 2048 in other places. But, there are times when I'm still learning the rules on this game that I have played for so long because it's like, "Okay, I think right now this is what's going to happen." And whether it does or doesn't, yeah, okay, I totally get the learning, now, can they teach me useful things?
KT (00:38:28):
Yeah. Yeah, totally...so one of the original games with a purpose was this game called, almost going to call it Duolingo, but I'm getting to that. It was called the ESP game and it was a data collection game of, two random people on the internet are shown the same picture and they can't talk to each other, but they have to come up with the same words to describe that image.
KT (00:38:55):
And if they match what the other person is saying, then that becomes a label for that image. So the two people will see a picture of sheep in a green field. And so they'll type sheep, green, field, sky, clouds. And one of them may type, like -
EW (00:39:10):
Idyllic.
KT (00:39:10):
-butts or something.
EW (00:39:12):
Yeah.
KT (00:39:13):
And then the other person will be like, "Well, I didn't type butts 'cause I wasn't thinking of that. I was thinking of the sheep." And so the ones that match up, yeah, like idyllic...those will become the labels for that image, and that had this game mechanic of, am I going to figure out the words to describe this that another human will also come up with?
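A toy version of that scoring rule, just to show the mechanic: each player submits labels independently, and only the intersection is kept:

```python
# Two players label the same image without seeing each other's answers;
# only the words they agree on become labels for the image.
player_a = {"sheep", "green", "field", "sky", "clouds", "idyllic"}
player_b = {"sheep", "field", "clouds", "butts"}

agreed = player_a & player_b
print(sorted(agreed))   # ['clouds', 'field', 'sheep'] -- these get attached to the image
```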
EW (00:39:36):
Yeah. If you're sitting there identifying the sheep species in Latin, that may not be what the other human does.
KT (00:39:45):
Yeah. You may not -
EW (00:39:46):
You may be right, but you may not be winning points.
KT (00:39:49):
Exactly. So you won't go with those labels. You'll find the ones that are more common and shared. And this game was by this guy, Luis von Ahn. And because it was making image labeling fun through a game, it kicked off this whole series of other games with a purpose, and then other people kind of...they didn't get the mechanics quite right in a way that, I dunno...some things that came after, I just felt like weren't good games. The mechanics of the game didn't match whatever the purpose was trying to do.
EW (00:40:26):
You just can't throw points at people.
KT (00:40:30):
No.
EW (00:40:30):
You have to give them more than that.
KT (00:40:33):
Yeah. At least a little bit more. I mean, points, sometimes they work enough that people keep trying it. They're like, "Oh, I do like to see my name on a leaderboard." But not everyone is like that. And there really needs to be something deeper where the person, by playing the game, is actually contributing to whatever underlying scientific or data cleanup purpose. Otherwise they may just be racking up points, but not actually helping you out.
CW (00:40:59):
It sounds like to properly design a game, you actually need to have some psychological understanding to know what motivates people.
KT (00:41:08):
Yeah.
CW (00:41:08):
And also, if you just do a naive thing, like you're saying with points, you can end up with these holes, like you said, where the game goes off in a different direction and people figure out ways to game the system.
KT (00:41:21):
Yeah.
CW (00:41:21):
And you don't get the data you want.
KT (00:41:23):
Yeah, exactly. Having the mechanics aligned with the underlying purpose is super important. But you asked about, "Can I learn useful things from these games?" And what Luis von Ahn is doing now, among probably other things, one of his main things is this app called Duolingo for learning new languages. And...it's not a straight up game, but it has a lot of elements of a game, ramping you up in a very gradual way.
KT (00:41:56):
And the idea of Duolingo in the first place was there's a bunch of texts on the internet. We need to translate more of the internet. Wouldn't it be great if we had that? And this was before automated translation techniques were good enough to use. So we need humans to do the translation, but maybe people aren't skilled in translating between English or obscure language one and obscure language two, or even English and some other obscure language. And maybe not obscure, but any pairs of languages. And so this idea of, "Maybe we can just teach people new languages and then they can start to help translate stuff on the internet."
EW (00:42:44):
Yeah. I can totally see this working, because for me it would be probably English and Spanish or English and French. And...you could give me an English phrase with an idiom in it and I would have to go figure out how to say that in Spanish in a way that represented the idiom part of it, as well as maybe the words part of it.
KT (00:43:08):
[Affirmative].
EW (00:43:08):
And that would force me to go learn more Spanish, which is something I always want to do. And it would help other people, that if multiple people translated it similarly, then you can start saying, "Oh, this is probably a reasonable translation."
KT (00:43:25):
Yeah, exactly. And then by being in this process where you're learning a little bit of new skills and then applying them, you'll be able to translate more, more effectively, and you'll just kind of grow and grow and grow in what you know, and what you're able to do.
EW (00:43:42):
And even if you presented me with, "These are five things that other people said, which of this is right?"
KT (00:43:49):
[Affirmative].
EW (00:43:49):
You could do that and I would play and learn.
KT (00:43:51):
Yeah.
EW (00:43:51):
And not care so much about just points. It would be about fun.
KT (00:43:56):
Right. And learning.
EW (00:43:58):
And learning. Alright.
KT (00:44:00):
So now Duolingo is a free, sometimes ad-supported app that you can use to learn new languages. And...I don't know how much the translating stuff on the internet plays into it anymore, but it's this accessible language learning tool that seems really great. Especially compared to "pay $500 for Rosetta Stone" or something.
EW (00:44:24):
Yeah. We don't need to talk about that. I wanna switch topics entirely because you are part of this company that is weird and cool. And I have trouble explaining it 'cause I get lost in AR and furniture...Can you explain what GrokStyle is?
KT (00:44:46):
Yes, totally. So GrokStyle is the company that I currently work for. We do visual search for furniture and home decor and sort of expanding to AI for retail in general. And what our core visual search technology does is allows you to take a picture of a piece of furniture, some chair that you like at your friend's house, and identify what that product is. Either the exact product match if we have it in our database or a visually similar, stylistically similar alternative.
KT (00:45:19):
And from that, you can do a whole bunch of things. "What is this thing? I want to buy it. I want to know exactly what it is." Or we can go beyond that to understand all of the products in designer showroom images, and know what things go together. And then recommend either stylistically similar options or complementary options. "You want to buy this sofa? Maybe you could also buy this chair and this coffee table and this rug. And these would all actually look nice together, and you don't have to worry about not having that stylistic judgment yourself if you don't actually have that."
EW (00:45:56):
And -
CW (00:45:56):
That seems hard.
EW (00:45:56):
It does seem hard. So there's -
KT (00:46:02):
It's just math and data and linear algebra and -
EW (00:46:08):
"It's just math." Okay. So I go to a friend's house. I take a picture and their, I don't know, 15th century throne that I have taken a picture of, it then tries to find a similar throne that can be purchased now at some major retailer. So like it says, "Oh yeah, if you get this at Target, it's really similar."
KT (00:46:34):
[Affirmative].
EW (00:46:34):
And so you have to have a huge database of existing furniture. You're not just like, "I'm taking this picture and then I'm going out to the internet and searching." You have to already know a lot about furniture.
KT (00:46:45):
Right. Yeah. We have our own huge internal database of photos of furniture...millions of products, millions of scenes of ways that people have used this product in the real world. And we have learned this understanding of visual style, some way for anyone that takes a new picture of something, for us to project that into some style embedding and look up what's nearby, what products are similar to this thing.
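A generic sketch of what embedding-based visual search looks like in principle, not GrokStyle's actual model or data: some network maps each catalog image and the query photo to a fixed-length style vector, and search is nearest neighbors in that space. The vectors below are random stand-ins:

```python
# Nearest-neighbor lookup in an embedding space, using cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
catalog = rng.normal(size=(1000, 128))                     # 1000 products, 128-dim embeddings
catalog /= np.linalg.norm(catalog, axis=1, keepdims=True)  # normalize to unit length

query = rng.normal(size=128)                               # embedding of the photo you just took
query /= np.linalg.norm(query)

similarity = catalog @ query                               # cosine similarity with every product
top5 = np.argsort(similarity)[::-1][:5]
print("closest catalog items:", top5, "scores:", similarity[top5].round(3))
```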
EW (00:47:22):
What if I take a picture of a Mission Style couch, which is a very specific style. You would be able to say, "Oh yeah, you might want a chair and this style of end table."
KT (00:47:33):
We're working on the recommendations part. For now we have a mobile app where we could take a picture of your Mission Style couch, and we'll find more of those.
EW (00:47:42):
More Mission Style couches.
KT (00:47:43):
[Affirmative].
EW (00:47:43):
For different prices from different places.
KT (00:47:45):
Yeah.
EW (00:47:47):
And how do you identify Mission Style? How do you identify the style of what you're looking at? Is this part of finding terms, search terms?
KT (00:48:01):
We are -
EW (00:48:02):
Tags?
KT (00:48:03):
The core of this is visual understanding. So just from tons and tons of images of couches, of different styles, we'll identify, "These are the ones that look closest to this one." And then we can look at the associated metadata to see what the names of the nearby matches are or what styles might be tagged on those already. But it starts from the visual path.
EW (00:48:30):
We talked about SIFT and how the Eiffel Tower isn't really a good candidate because it has holes.
KT (00:48:38):
[Affirmative].
EW (00:48:38):
And because it has repeats.
KT (00:48:39):
[Affirmative].
EW (00:48:42):
Chairs.
KT (00:48:43):
So in this case, we're just doing deep learning on tons and tons of images, and SIFT isn't involved. SIFT is a feature that a human would say, "I'm going to use SIFT in this pipeline." And I've done some other computer vision stuff with faces where I was like, "We're going to match faces by comparing SIFT features across faces." And after deciding, "I'm going to use SIFT, I'm going to look at these regions of the image," I've got to get all my faces lined up first.
KT (00:49:17):
But in this deep learning era, we can say here's a bunch of images of all these things. And I'll tell you how they're similar and how they're different, and the computer can figure out what features and what internal representations are most useful, most discriminative for its purposes.
EW (00:49:35):
Does it have multiple stages? Does it figure out it's a chair before it figures out what kind of chair? And figure out chair versus couch versus table?
KT (00:49:45):
Our system does predict what category something is. So yeah, it'll say, "I'm pretty sure this is a chair." So then it will go look up chairs instead of looking across the entire database of everything that we have.
EW (00:49:59):
Because it would be more computationally optimal to say, "Okay, this is a chair. Now let's go into the chair subcategory and finish looking up, is it a 1916 chair or a postmodern chair?"
KT (00:50:13):
Right. Another thing we can do though is, we can say, "You took a picture of this wicker chair, and we know it's a chair," but if we start looking for tables that are nearby instead, we might find wicker, some other aspect that's stylistically similar, but in a different category. So our learned style embedding does kind of cluster objects, even if there are different categories, but they're still visually stylistically similar. And we'll kind of still put them together.
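Extending the same sketch, still with made-up data: a predicted category can narrow the search, or you can deliberately search a different category with the same style vector to find stylistically similar pieces:

```python
# Category-restricted nearest-neighbor search over the same kind of embeddings.
import numpy as np

rng = np.random.default_rng(1)
embeddings = rng.normal(size=(1000, 128))
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)
categories = rng.choice(["chair", "table", "sofa", "rug"], size=1000)

def search(query, category, k=5):
    idx = np.flatnonzero(categories == category)    # restrict to one product category
    scores = embeddings[idx] @ query
    return idx[np.argsort(scores)[::-1][:k]]

query = rng.normal(size=128)
query /= np.linalg.norm(query)
print("similar chairs:", search(query, "chair"))
print("tables in a similar style:", search(query, "table"))
```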
EW (00:50:46):
I should have asked you what your favorite machine learning language was. Keras, TensorFlow, straight math?
KT (00:51:00):
We're, you know, using several of these different machine learning libraries and rolling our own in certain cases and using Python to strap it all together.
EW (00:51:10):
Alright. Wow. IKEA, tell me about IKEA.
KT (00:51:17):
Okay. So GrokStyle is this visual search service provider. And IKEA is one of our big public clients right now, where they have an augmented reality app called IKEA Place. And within that app, you can access a visual search feature. And that is powered by us.
EW (00:51:37):
And so I go to my friend's house, I see a chair I like, I take a picture of it. I say, "You know what I want, I want this chair in my house." So I go home and I go to IKEA app, and then I say search, and it says, "Your chair is something that has weird letter "O"s." And then -
KT (00:52:00):
Yeah. Yeah.
EW (00:52:01):
It just plops it into my -
KT (00:52:03):
Yeah, so you could be at your friend's house, use the IKEA Place app to search there and say, "I'm going to figure out what this chair is." And it'll be like, "Oh, this is the Paulin chair," or some name that you might struggle to remember and type in later, especially with all the accents. And you can favorite it in the app from there and then bring it home and then place it into your home and see, "Oh, I like how this fits. I'm going to consider buying this."
EW (00:52:28):
Even though their chair may not be an IKEA chair.
KT (00:52:31):
Right.
EW (00:52:31):
It's going to find whatever similar, because that's what GrokStyle does.
KT (00:52:33):
Yeah. Yeah. If you take a picture of that cool throne that they have, it'll find the closest -
EW (00:52:40):
Closest -
KT (00:52:40):
- Ikea throne item.
EW (00:52:44):
How does it deal with size? I mean, it's just one picture. It's just, there's no 3D. How do I know it isn't a six foot by 10 foot chair as opposed to a normal-sized chair. Is that a future -
KT (00:53:01):
You don't know...it will find, if you take a picture of a chair and there also happens to be miniature versions of that chair, we might still find the little mini one. And Amazon sells tiny little -
CW (00:53:18):
That's funny.
KT (00:53:18):
- dollhouse chairs. We can't tell if you're taking a picture of -
CW (00:53:22):
"This is not what I was thinking."
KT (00:53:22):
- of a dollhouse chair.
EW (00:53:22):
"This doesn't fit in my space at all."
KT (00:53:27):
But once you're in AR, those models are all true scale, true to life. And with the current capabilities of AR, the scale, moving your phone around in your space and looking at what's in your space, that does estimate what size your space is and what the scale of everything is. So that if you put a three foot tall chair or something out there, it will actually be the appropriate size and you can measure things.
EW (00:54:00):
So the AR part is okay, it's just that I can take a picture of a doll chair.
KT (00:54:07):
Yeah.
EW (00:54:08):
Or a giant chair.
KT (00:54:09):
Yeah.
EW (00:54:09):
And it will find the most similar. But it will then be normal size because -
KT (00:54:12):
Right.
EW (00:54:12):
- the AR will show me what size it is.
KT (00:54:15):
Yeah. And I do have a little IKEA chair on my desk. I should do the demo of, take a picture of the dollhouse chair and then place the full-size one in my space.
EW (00:54:26):
Okay. I should ask you more about IKEA, but we're almost out of time. And I wanted one more thing. You started a Santa Cruz PyLadies meetup.
KT (00:54:36):
Yes.
EW (00:54:38):
Why?
KT (00:54:39):
So Santa Cruz is close to Silicon Valley, but not directly in it and -
EW (00:54:47):
Close and yet so far.
KT (00:54:47):
Yeah. And I wanted to meet more developers, more technical people, especially women. I was like, "They must be here in Santa Cruz somewhere, but I don't know where, I don't know where they are. I need this community around me." So I started this PyLadies chapter in Santa Cruz to bring people together and it's worked out really well so far.
EW (00:55:12):
How much does it cost to be the person who organizes all of this? Is this expensive?
KT (00:55:19):
It is not terribly expensive. I work out of a coworking space called NextSpace in Santa Cruz. And they have rooms,...conference rooms, and they allow me to host PyLadies for free because it brings people from outside of NextSpace into the space. So that would probably be the biggest cost otherwise, just getting a space.
EW (00:55:42):
Finding a good space.
KT (00:55:42):
Although you could probably get companies to sponsor it as well. And then on top of that, there's meetup fees for Meetup.com. But I think I can get a grant from the Python Foundation to help pay for those. And they're not that much, it's 40 or 80 bucks a year. And then there's food and snacks, but sort of been figuring that out over the last few months of how much food we need and people, like yourself, bring snacks as well.
KT (00:56:12):
So it's sort of community supported right now. And one of the reasons...that I wanted to have this meetup in the first place was I went to some of the other meetups, there's some JavaScript meetup at a bar, and there were a lot of dudes there and I took my two-year-old daughter with me. So there were two of us women, but it was like, I had to bring my own extra female that I had made.
EW (00:56:45):
And so it is limited to women or people who are -
KT (00:56:53):
People who identify as ladies, as PyLadies. I mean...it's open to anyone that would feel comfortable in that space. Although if you are a man, we request that you come as the guest of another person in attendance.
EW (00:57:12):
And do you spend a lot of time organizing it?
KT (00:57:15):
I should probably spend a little bit more time, finding more people to give talks and stuff, but not too much. No.
EW (00:57:24):
So it's not that big of a cost. It's not that big of an effort, but you do get a fair amount out of it.
KT (00:57:31):
Yeah.
EW (00:57:33):
What do you get out of it? I mean, I didn't know there was a VI game, but...
KT (00:57:38):
Yeah. So...10 or so people show up to the meetings and we have them every two weeks. And it alternates between whether it's a project night and people come and work on projects together, or we just talk about all kinds of things together. Or a speaker night where someone presents. And just being in a space with other women and other tech people in the area and seeing what other people are working on and sharing ideas and just getting excited about things. It just brings warm fuzzies to my heart.
EW (00:58:16):
I enjoy it. And I'm glad you started it because it is hard to find a good technical community. And many of our meetups do tend to meet in bars and I'm unlikely to go to a bar to meet people just because -
KT (00:58:33):
Yeah, it's -
EW (00:58:33):
- it's not where I want to talk because I can't hear anything.
KT (00:58:37):
Yeah. It's hard to get into the nitty-gritty technical details sometimes if it's dark and loud and you don't have a computer around. And you don't get to really know what other people are passionate about and what they're excited about and how that can sort of rub off on you and get you really excited about something. But if you're in a sort of more collaborative space or environment... I'd love to have longer PyLadies meetups sometime, like little Dev House style PyLadies.
EW (00:59:14):
Yeah. Saturday morning. Yeah. We could. And I thought it was interesting that one of the presenters then went to a job interview and was asked a question that was basically from her presentation.
KT (00:59:26):
Yeah.
EW (00:59:27):
And it was funny 'cause it wasn't... Because it is every two weeks, so every four weeks there's a presenter. It's pretty easy to sign up on the presenter list, let me tell you, but it is good practice.
KT (00:59:43):
Yeah.
EW (00:59:45):
I mostly wanted to ask you about it 'cause I want to encourage people who have this idea that it doesn't have to be a lot of effort and sometimes it doesn't work. I mean, there's a decent chance that it may in five years just be you and me looking at each other, going, "Well, maybe this has run its course."
KT (01:00:05):
Yeah. Which is also fine.
EW (01:00:07):
Yeah.
KT (01:00:08):
But for those five years or whatever that it exists, it can be all kinds of great opportunities.
EW (01:00:15):
I'm meeting new people. People who have sent me to other meetups, which were then way too crowded. But yeah, it's neat.
KT (01:00:22):
And there's two women there that run a Python study group in Felton.
EW (01:00:27):
Yeah.
KT (01:00:27):
So...they're on top of, we're just gonna do this thing for ourselves.
EW (01:00:34):
Yeah. So if you're out there thinking, "Gosh, I wish there were other people that I could talk to," whether it's PyLadies or a JavaDev meetup, the space is the hardest part. But if you can find a space, even if it's a coffee shop that has a back room, it might be worth it to try it. And 40 or 80 bucks, yeah, that's a lot to try it. But how much do you spend on conferences? This is like a yearlong conference, one hour at a time.
KT (01:01:05):
And those fees are only for Meetup.com.
EW (01:01:09):
Which is kind of the easiest way. I understand.
KT (01:01:11):
Yeah. It has made it very easy. And people have found the Pyladies -
EW (01:01:15):
People search Meetup, yeah.
KT (01:01:15):
- through Meetup.com. So, but...if that was a cost or something, maybe there's more organic ways to advertise and just get people together that you want to share your technical interests with.
EW (01:01:31):
Yeah. I found a writing group on Nextdoor of all places. So it's all kinds of stuff.
KT (01:01:37):
Yeah.
EW (01:01:39):
Alright. We have kept you for quite a while given -
KT (01:01:43):
Oh, we have so much more that we could talk about.
EW (01:01:44):
We do. We totally do. Which just means that you can come back.
KT (01:01:47):
Okay.
EW (01:01:47):
And since you're local, you can come back.
KT (01:01:50):
Yeah, that'll be easy.
EW (01:01:53):
Chris, do you have any...?
CW (01:01:54):
I was wondering if you had advice for people who want to get into this whole space, either if they're in college, or hobbyists, or people who are professionals who want to change to something, I mean, what's the right path to start learning about this whole space? 'Cause it seems like a lot of different things.
KT (01:02:16):
Hm. Which, yeah. Which part of the space, the computer vision part, the building interactive systems that people can play with part, the game design part -
CW (01:02:26):
I guess the computer vision part. Yeah.
KT (01:02:27):
Because it's a popular thing right now, there are a lot of tools coming out, including tools for making your own models and using them. So I think TensorFlow is being ported to JavaScript, and trying to make it as easy as possible for people that might be in a web programming language to get access to these tools and then build things that are running in other people's browsers. So they're the easiest possible thing to share.
KT (01:03:02):
I think personally going that route where you are using JavaScript-type things, where you can make something small and share it with your friends and your friends will be like, "Wow, that's so cool." That will just give you a ton of encouragement to keep going. And then I think with JavaScript, you can look around and see how other people are doing this. 'Cause you can maybe get access to the code a little bit more easily. So, do it in a social kind of way.
CW (01:03:39):
Hmm. Okay. It's good too.
EW (01:03:39):
Yeah, I mean, there's a lot of good social benefits to being able to share.
KT (01:03:47):
Especially if you're just getting started and trying to figure it out.
CW (01:03:49):
Yeah.
EW (01:03:52):
Cool. What about getting started in games with a purpose?
KT (01:03:56):
This morning there were tweets from this human computation conference called HCOMP, which is happening in Zurich right now. And I think there's a keynote from the people doing Zooniverse, which is a platform for all these different citizen science projects. And some of them may not be game-flavored at all, but there are probably game-flavored ones, or ones that could be more engaging if they were sort of more game-like and helping ramp people up and learn things.
EW (01:04:40):
Zooniverse is the citizen science place that does Galaxy Zoo -
KT (01:04:44):
[Affirmative].
EW (01:04:44):
- where you can identify different galaxies or different features in pictures.
KT (01:04:51):
Yeah. And they have a bunch of other projects too, like looking at pictures that camera traps have taken, not a trap, but cameras that are out in the wild where animals will walk by and a motion sensor will trigger...and the camera will take a picture, and then citizen science people have to go and tag those and say, "There's actually an animal here. It's a fox, it's a bunny, it's a deer, it's an elephant."
KT (01:05:16):
And so I think there's lots of these that are out there, ones that you can go find and participate in. And then I like Zooniverse as a platform for making more of those. So if you have an interest in kind of working on the building of those tools, the building of those projects, I'm sure there's space for that as well. Whatever your passion is, or even getting involved with the existing ones.
EW (01:05:45):
Do you have any thoughts you'd like to leave us with?
KT (01:05:48):
Last brief thought on augmented reality: visual search is going to be a big part of that, understanding what your environment has in it already, so you can do more meaningful, more intelligent augmented reality.
EW (01:06:07):
Our guest has been Kathleen Tuite, computer vision expert and software engineer at GrokStyle. If you'd like to join us at PyLadies in Santa Cruz, there'll be a link to the meetup in the show notes. And if you're not local to Santa Cruz, there are lots of PyLadies and lots of meetups. Check around. It's worth it. Thank you for being with us, Kathleen.
KT (01:06:26):
Thank you for having me.
EW (01:06:28):
Thank you to Vicki Tuite for introducing me to Kathleen and for producing her. Thank you to Christopher for producing and co-hosting this show, and thank you for listening. You can always contact us at show@embedded.fm or hit the contact link on embedded.fm. Thank you to Exploding Lemur for his help with questions this week. If you'd like to find out about guests and ask questions early, support us on Patreon.
EW (01:06:52):
Now, a quote to leave you with from Douglas Engelbart: "In 20 or 30 years, you'll be able to hold in your hand as much computing knowledge as exists now in the whole city, or even the whole world." I don't know when he said that, but I bet it's still true.
EW (01:07:12):
Embedded is an independently-produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there, and do not receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.