Christmas 2015, my friends and I were doing a small-budget secret Santa and I had my friend Rebecca. I’d also just recently found a Makerspace near my hometown and was itching to use their 3D printers.
Wanting an excuse to 3D print something, needing a nice gift for my friend and having (what I thought was) a hilarious idea, I started researching how to make what I later dubbed ‘Mt. Usmore’, a 3D-printed miniature model of myself and two friends in a Mt. Rushmore-inspired setting.
This project took a few weeks from conception to completion and used a mix of photogrammetry and 3D laser scanning to capture the models, plus a small amount of 3D sculpting and, of course, 3D printing.
Photogrammetry is the use of photography to capture an object from multiple angles and obtain measurements from it, usually to create a 3D model of the object. I had read about this in a Tested article in which Norman Chan gets himself made into a small colour model. The article covers a range of software and hardware options, along with suggestions for technique.
I decided to go with the free Autodesk 123D Catch software, since I couldn’t justify buying the other packages just for this one jokey gift and I had time to spare waiting for renders. Also, since my friend Hannah is a keen photographer with access to a small photography studio at her college, I asked her to take the photos I’d use to make the 3D models.
Capturing the first models
The basic setup we had going was one of us (the subject) sitting on a stool in the middle of a small room while a photographer with a bridge camera took pictures from as many angles as possible in a spiral pattern, always remaining roughly the same distance from the subject. Two spirals were taken, one broad spiral to capture the majority of the subject, then one smaller spiral focused on capturing the top of the subject’s head, a very problematic area. These were taken in quick succession, almost treating both spirals as one pass. Glasses and other accessories were removed before starting.
I would then take each set of photos (a ‘set’ being the photos of one subject across both spirals) and upload them to the cloud software. Rendering in 123D Catch happens in the cloud and takes a while; it’s not something you can speed up with a faster machine. The software also occasionally fails to map some photos. Luckily, a lot of the time the photos on either side (relative to the spiral) catch most of the detail from the missing picture. When that wasn’t the case, I mapped the photos myself by choosing several points in the unmapped photo and then finding the corresponding points in other, already-mapped, photos.
If I were to attempt this again, I’d make that manual mapping easier by intentionally including sharp corners or other precise, easy-to-spot points in the frame. One of my subjects, for example, wore a shirt with a ‘The North Face’ logo on it; the corners of the letters gave great reference points. Since I wasn’t planning to print in colour, simply placing small coloured dots on the subject would work too.
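The reason those matched points are so powerful is triangulation: once the same feature has been located in two photos whose camera positions are known, its 3D position can be solved for directly. This is the principle behind photogrammetry generally, not necessarily 123D Catch’s exact algorithm, and the cameras and point below are made-up toy values, but it shows the idea in miniature:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its image positions in two photos.

    P1, P2 are 3x4 camera projection matrices; x1, x2 are the
    (u, v) image coordinates of the same feature in each photo.
    Uses the standard direct linear transform (DLT): each view
    contributes two linear constraints on the homogeneous point X.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Best solution is the right singular vector with the smallest
    # singular value (the approximate null space of A).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise

# Two toy cameras: one at the origin, one shifted 1 unit to the right.
P1 = np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

point = np.array([0.5, 0.2, 4.0])          # "true" feature position
x1 = point[:2] / point[2]                  # projection in camera 1
x2 = (point[:2] + [-1, 0]) / point[2]      # projection in camera 2

recovered = triangulate(P1, P2, x1, x2)    # ≈ [0.5, 0.2, 4.0]
```

In practice the camera positions are themselves unknowns that the software estimates from the matches, which is why distinctive, easy-to-spot reference points make such a difference.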
The first models
The first models captured via photogrammetry and rendered during the session were hit-and-miss. The unmapped photos I mentioned were only mapped later, and we were still working out the proper photography methods, but we learned and improved as we went and achieved some pretty good results!
The exception was the model of myself, which was fraught with irregularities and holes that couldn’t easily be fixed into a recognisable likeness. The models of Sam and Kyle, however, had only minor issues and their recognisable features were fully intact, meaning I could patch holes and perform other fixes without much trouble. The model of myself would have to be redone by other means, and I’d heard of another process that would be perfect.
3D laser scanning
3D scanning isn’t just a sci-fi thing any more; you can get a colour model of yourself from ASDA (for the low, low price of £85)! Desktop 3D scanners and similar technologies are becoming increasingly available to consumers, but many people don’t realise that they’ve had a 3D scanner in their home for years.
To capture the 3D model of myself, I used an Xbox Kinect! The Kinect is great because, using Skanect on my MacBook, the software renders the model in real time, letting us correct and patch holes as we go. The Kinect is relatively low-res, but it scans in colour and we didn’t need much fine detail for this project.
Finding a Kinect was a little trickier than expected, since most of the newer models feature a proprietary connector. Microsoft sell an adapter, but they’re around £30 each and I didn’t have time to wait for delivery. I asked around and luckily found an older model with a USB connector. However, the older Kinects aren’t compatible with Microsoft’s 3D Builder, which is why I used the third-party program Skanect instead. Skanect is a paid program, but you can use it for free if you only export at a very low resolution.
Capturing the model was relatively similar to the previous method, the main differences being that we were now scanning in a bedroom (Christmas was approaching fast) and that, since the Kinect must be wired to a laptop, the subject was placed on a rotating surface (in our case, a stool) and the Kinect was only moved vertically.
The subject would sit as still as possible on the stool while the Kinect, tethered to the laptop, was moved vertically in an arc to capture everything from below the chin to the top of the head. The stool was then slowly rotated through 720 degrees, with the latter 360 slightly faster, to ensure that all relevant detail had been captured.
The result was pretty fantastic, even better quality than any of the photogrammetry models, but since I was using the free version of Skanect, I could only export at a very low resolution. That meant the exported model was actually less accurate than the photogrammetry models of Sam and Kyle. Luckily, I was able to improve it drastically in Meshmixer.
First, I used Meshmixer to fill any holes in the models. Both capture methods produced hollow objects with some holes, making this fix vital. Next, I used the smooth sculpt tool to even out any bumps or inconsistencies in the photogrammetry models; setting the tool to very light and quite broad worked fantastically.
The next job was to smooth out the laser-scanned model. It was made of relatively few triangles, which was very apparent next to the other, smoother models. Since the model of me didn’t need a lot of fine detail to look like me, running another light brush over the whole thing worked extremely well, instantly appearing to up the resolution. That was it for the small tweaks; now to set the scene!
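Smoothing brushes like Meshmixer’s generally work along the lines of Laplacian smoothing: each vertex is nudged toward the average of its neighbours, so sharp bumps get pulled back toward the surrounding surface. This is a toy sketch of the idea on a 2D polyline, not Meshmixer’s actual implementation:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, lam=0.5, iterations=10):
    """Nudge each vertex toward the centroid of its neighbours.

    vertices: (n, d) array of positions.
    neighbors: list of index lists; a vertex listing only itself stays put.
    lam: step size (0 = no movement, 1 = jump straight to the centroid).
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        centroids = np.array([v[idx].mean(axis=0) for idx in neighbors])
        v += lam * (centroids - v)
    return v

# A flat line with a single bump in the middle.
verts = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
nbrs = [[0], [0, 2], [2]]   # endpoints anchored to themselves
smoothed = laplacian_smooth(verts, nbrs)
# The bump at y = 1.0 is pulled down close to the flat baseline.
```

A "light and broad" brush corresponds roughly to a small step size applied over a large neighbourhood, which is why that setting evens out lumps without erasing recognisable features.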
Completing the model
The next step was to import all three busts into one project and actually compile them into Mt. Usmore. This was achieved by first creating a few large cuboids to form the mountainside into which the busts would be set: one as a base, one as the cliff face and one or two other shapes dotted about to make it look like we were really set in stone, not just placed upon it.
This was great for the rough outline, but the fine detail would be the most critical part, and the most fun. To get the final shape of the rock, I used Meshmixer’s various sculpting tools to push and pull at the shapes, giving them a more natural, imperfect look and blending the edges of the busts to really make them one with the rock. After a lot of this, the model was finished except for one final touch.
The title, engraved into the front of the model, was added with Microsoft’s 3D Builder. It’s a free app and generally not that great, but it has a very simple, easy-to-use engraving feature that gives the user control over font, font size and depth. All of these had to be carefully balanced to give a pleasing aesthetic while remaining large and deep enough to survive the 3D printing process.
And that was it: the model was finished! Now to put it on a USB stick, take it to the Makerspace and print it out. Given the estimated print time from Cura, the software controlling the Lulzbot I used, I had no choice but to print in high-speed mode (which still took six hours!). However, I’m still very happy with the results of my first ever 3D print!