!!Con West 2020 - Kathleen Tuite: The hacks behind my 3D reconstructed wedding cake topper!
Mar 20, 2020 18:56 · 1534 words · 8 minute read
Okay. Thank you very much. I’m Kathleen. My pronouns are she/her. And I’m gonna tell you a little story. It’s part love story: I met my partner here at UCSC, right in the Baskin Engineering area, in a computer graphics class. One of the first lab activities was bringing in an object from home and 3D scanning it with a structured light sensor, and I brought this garden gnome that my roommate had, that we called No-Pants Dan, because his legs were flesh-colored. But it’s also a story of overengineering my way into a problem and then overengineering my way out of it again. And it’s also a story about how cameras and imaging sensors and things that make 3D models of things are super cool, and how they’ve gotten a lot better over the last many years. So we’re gonna learn all about that. A little bit about me: I got my PhD in computer science from the University of Washington.
01:27 - Doing collaborative photography and crowdsourcing 3D reconstructions through a game I made called Photo City. And there’s some other 3D reconstruction stuff in there. So this is my thing: making 3D models of things, especially buildings and large spaces. So when I was getting married, I guess a thing you’re supposed to do is have a cake and put a little thingy on top that represents you as a couple. So my partner and I were trying to figure out what that should be.
01:54 - Maybe we should make a 3D model of ourselves! And the imaging techniques that I was using for my stuff were all, like… image-based. We needed a depth sensor, or something that would work on humans. Conveniently, there’s a guy, Richard Newcombe, who was doing a postdoc at my lab at UW. Here he is, giving a demo of his Kinect Fusion technology. The way Kinect Fusion works is you have a Kinect with a structured light sensor and you wave it around, and it fuses all of these different depth fields, seen from different angles, into a single 3D model. So we’re like…
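(Aside: the core fusion idea is simpler than it sounds. Every depth frame contributes a truncated signed-distance observation for each voxel of a volume, and the volume keeps a weighted running average of those observations. Here’s a minimal Python sketch of just that update step; the grid size and weighting are made up for illustration, and the real Kinect Fusion pipeline also does pose tracking, projection, and raycasting, all of which are left out here.)

```python
import numpy as np

# Toy illustration of the fusion step, not the real KinectFusion code.
GRID = (64, 64, 64)          # made-up voxel grid resolution
tsdf = np.zeros(GRID)        # running average of signed distance to the surface
weight = np.zeros(GRID)      # how many times each voxel has been observed

def fuse(frame_sdf, frame_weight):
    """Fold one depth frame's observations into the volume.

    frame_sdf:    per-voxel truncated signed distance measured from this view
    frame_weight: 1.0 where the voxel was visible in this view, else 0.0
    """
    global tsdf, weight
    num = tsdf * weight + frame_sdf * frame_weight
    weight = weight + frame_weight
    tsdf = np.divide(num, weight, out=np.zeros_like(num), where=weight > 0)

# Waving the sensor around just means calling fuse() once per depth frame;
# the surface is wherever the averaged distance crosses zero, which something
# like marching cubes can then turn into a triangle mesh.
```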
02:29 - Yeah! We’ll do that! But unfortunately, Richard was out of town during the few weeks we had this idea to do this for our wedding, so we searched online and found a company called Shapify, where you could go to a fancy booth somewhere in Europe and get scanned and they would print a model, or you could use their software and do some DIY Kinect scanning, where you set up your Kinect and rotate in front of it yourself. And then you could order a 3D print from this company. Okay. This is what we’ll do. So back to the computer graphics lab at UW now. We snuck in, in the middle of the night, with our wedding clothes on, when none of my friends were there. And here’s a test run, before we put on our fancy clothes, of trying out the Shapify software. Here’s a Vine of a test run of the two of us, where we had to stand very still and move in eight different directions, trying not to change our pose. But it made a model.
03:33 - And we ordered a color 3D sandstone print from the company, Shapify, and it came back like… Aw. Isn’t that cool? Pretty cool. But… I mean… well… this wasn’t gonna get any better. The lighting in the lab was really crappy, fluorescent school lighting, and the colors on the model were kind of green and dark and sickly. I was like… This is just not aesthetic enough for my wedding. I don’t like this. How do I get this data so I can reprint it? And I went to the website for Shapify and there was no way to get the model out! But I could see this 3D interactive viewer that was on the website at the time.
04:15 - I could see the data! The triangles are in there! I know that they’re there! Maybe I could have emailed them and asked. But… I was too shy. I just… I wrote a program instead. And conveniently, I’ve made some 3D viewers back in my day, since the undergrad class here: I made a Java-based interactive point cloud tracing thing, and I made a streaming point cloud viewer built in Flash. I was like… Maybe I can figure this out! So that’s what I tried. This is what I did: on the Shapify website, I inspected the network traffic. I saw, like, a big blob of 3D data coming in. But it was encoded. And I was like…
05:03 - I can’t read this! I downloaded a copy of the whole website, including the local JavaScript file, to poke at and get it to decode the model for me. Somehow it successfully did that, and I was able to print out the triangles, format them in a nice PLY file of vertices and edges and triangles, and get the data! And as a note: when I was making this talk, I saw they now have server-side rendering, so you drag a little box around and it makes the picture on their side. So you can’t get the triangles out as easily! I don’t know. So okay. We got the 3D model up. Yes. Here it is. Great. Now I needed to print it.
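(If you’ve never looked inside one, a PLY file is just a short text header followed by the vertex and face data, so dumping a decoded mesh into one takes only a few lines. Here’s a minimal sketch in Python; the variable and file names are illustrative, not Shapify’s actual data layout.)

```python
# Minimal ASCII PLY writer: header, then one line per vertex, then one per face.
def write_ply(path, vertices, faces):
    """vertices: list of (x, y, z) floats; faces: list of (i, j, k) vertex indices."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(vertices)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write(f"element face {len(faces)}\n")
        f.write("property list uchar int vertex_indices\n")
        f.write("end_header\n")
        for x, y, z in vertices:
            f.write(f"{x} {y} {z}\n")
        for i, j, k in faces:
            f.write(f"3 {i} {j} {k}\n")

# e.g. write_ply("wedding_topper.ply", decoded_vertices, decoded_faces),
# and MeshLab will happily open the result.
```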
05:45 - And at this point, there were like two weeks, maybe less, before our wedding. We couldn’t go to Shapeways or something. More pictures of the model. So we had to go to our local friendly maker space in Seattle, Metrix, which is now closed, and we asked them for help. And one of the things about the model was that, the way the Kinect worked, my husband’s shiny dress shoes messed up the scan of his foot. So a person at Metrix painstakingly reconstructed the foot for us, which was really nice of her. So we started it printing. We went in there probably 11 o’clock at night. It was taking a while. We’re like… We’ll come back tomorrow.
06:30 - We came back the next day and saw our model in the little display case. Is this like a ceramics painting place where it’s on display and you can pick it up and take it home? They’re like… No, we printed a second copy of you to keep for ourselves. It’s kind of like that Robin Williams movie, One Hour Photo, where he keeps copies of a family’s photos when he’s developing their film. Also, looking at their Flickr stream… Like… Oh, great. They took lots of pictures of the 3D print of us. So that was kind of nice to see as I was making this talk. We got it home.
07:07 - Here’s the colored sandstone next to the white plastic, a couple of different views of that. We made it a little bit bigger. Some details on the back. We put it on our cake. It all worked out. And also, as I was making this talk, I went back to the website. Now you can download the data for free, since I already bought it. So I was able to just get it and look at the color model, which I hadn’t seen before. So that was convenient. Yeah. You know. And in the last minute or so, I just want to talk about… It’s 2020! This was actually done in 2014, when we had a Kinect.
07:49 - What would you do now? Well, now there’s like a tiny little Kinect in my iPhone these days. Like, literally the structured light sensor that the Kinect has, there’s something like that in your phone. There’s a little dot projector; it’s how Face ID works. And I hadn’t even put this together before: oh, that’s where the depth camera is, it’s on the front of my phone. So I found an iOS app called Capture, and I was using it before this talk. And this sure looks a lot like the visualization of Kinect Fusion to me. And it makes a nice little point cloud.
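(Under the hood, turning one of those depth images into a point cloud is just back-projecting each pixel through a pinhole camera model. A rough Python sketch, with placeholder intrinsics rather than any particular phone’s real calibration values:)

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: H x W array of distances in meters; fx, fy, cx, cy: pinhole
    intrinsics (placeholder values, not a real phone's calibration).
    Returns an N x 3 array of points in the camera's coordinate frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```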
08:32 - So I think if I were to redo this, in this day and age, I wouldn’t have to sneak into my lab. I could do a lot of this in the comfort of my own home. I could find a place with better lighting. I was very excited to make some models of my cat here. And oh, we had to find her when she was sleeping, so she wasn’t like… Ooh, you’re waving a phone around! So with that, here are three of the tools that I used: MeshLab, to look at 3D models of everything; Shapify; and the Capture scanning app. And I want to point out that this whole topic, this project, grew out of my undergraduate education here at UCSC a long time ago, with amazing TAs and graduate students. At the time, they were getting paid enough to live here. Now they’re not, so that’s why they’re on strike. And I urge you to support the grad strike here. So thank you.