Photogrammetry has garnered much publicity recently as a means of capturing 3D data about heritage assets, and here at the Digital Building Heritage Group we’ve been using it professionally for a number of years to do this. For much of that time it has been a highly specialised process, but recent advances in technology, such as the tiny, very high resolution cameras in modern mobile phones and the availability of reliable, free-to-use software like Autodesk’s 123D CATCH, have brought the technique well within the reach of anyone with a modern mobile phone and an internet connection. That is the theory at least, but from experience with our architecture students studying the digital reconstruction of historic buildings we’ve seen that in reality the process can be more complex than this, requiring considerable care and planning to achieve high-quality, exhibition-ready results of real usefulness. We routinely use a high-quality 35mm digital SLR and the industry-standard Agisoft PhotoScan to create 3D point clouds, Triangulated Irregular Network (TIN) meshes and Digital Surface Models (DSMs) for our digital models and 3D prints, and it is an excellent software package for doing so. It does come with a price tag, however, which can place it beyond the amateur or student user’s reach, so we thought we’d look at how easily this part of the user community can engage with photogrammetry and the kind of results they could hope to achieve. For this we turned to the most commonly available form of camera – the mobile phone – and Autodesk’s 123D CATCH, a free-to-use photogrammetry package. Autodesk’s blurb says that 123D CATCH “….captures places, people and things in 3D using your Windows Phone or Mobile device, iPhone, iPad, Android device, or any camera. Share your catches, or 3D print a real object!” We thought we’d test the veracity of this claim with an ordinary iPhone and a visit to a cathedral, and see what could be achieved.
A day out at Lincoln Cathedral provided a typical user opportunity to grab some tourist snaps of the architecture using an iPhone 6S. In the north aisle it also provided the opportunity to walk around a stone copy of a medieval Christ’s Head sculpture on a table display and take 21 photographs of it over a period of about 3 minutes, in a roughly structured double circuit of the object: the first circuit at a low level, the second at a higher level. The resulting photos, taken with autofocus and automatic lighting compensation activated in the phone’s software but without flash, were all of reasonable quality and good resolution, more as a result of the 12-megapixel camera within the phone than any great skill on the user’s part.

Photogrammetry is the science of making measurements from photographs, especially for recovering the exact positions of surface points, and is a technique enjoying increasing popularity in the arts and creative design sector for the digital capture of 3D objects and scenes. The captured data are most often manipulated as 3D models for use in architecture, product design and engineering, performance art, fine art and conservation. The underlying method of creating a 3D mesh of surface points is properly called stereophotogrammetry. It involves estimating the three-dimensional coordinates of points on an object by making measurements in two or more photographs of the same thing taken from different positions. Common points are identified in each image, and methods related to triangulation, trilateration and multidimensional scaling are used to calculate the relative x, y, z positions of the cameras and of the points on the object. These can then be aggregated to reconstruct the 3D form of the object from the points identified on its surface.
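The triangulation step described above can be sketched in a few lines. This is only a toy illustration of the principle, not what 123D CATCH or PhotoScan actually run: the two camera matrices and the sample point below are invented for the example, and a real pipeline must first estimate the camera positions from the matched points themselves.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one 3D point from its pixel coordinates in two photos,
    given each camera's 3x4 projection matrix (the DLT method)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the singular vector belonging to the
    # smallest singular value of A.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]  # de-homogenise to x, y, z

# Two invented cameras: identical intrinsics K, the second shifted one
# unit to the right of the first.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])

X_true = np.array([0.2, -0.1, 4.0])        # an imagined point on the sculpture
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

print(triangulate(P1, P2, x1, x2))         # recovers [0.2, -0.1, 4.0]
```

With noise-free matches the point is recovered exactly; with real photos each matched feature gives only an approximate ray, which is why the software solves for thousands of points and the camera poses together.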
There are a number of good photogrammetry packages available which do this and a great deal more, including capturing bitmaps of the object’s surface and “wrapping” them as textures onto the 3D mesh to give the model a lifelike appearance. Programs like Acute3D’s Smart3DCapture (now part of Bentley Systems and renamed ContextCapture), Pix4Dmapper, PhotoScan, 123D CATCH, the Bundler toolkit, PIXDIM and Photosketch all allow people to quickly make 3D models using this photogrammetric method. Whichever is used, it’s likely that there will be gaps in the resulting mesh where data was not available or could not be calculated from the photos, so additional work on the mesh with software like MeshLab, Netfabb or MeshMixer is often still necessary to create whole, “water-tight” models. We use Netfabb Pro (http://www.netfabb.com/ ), and occasionally Magics (http://software.materialise.com/magics ), for cleaning up the .obj files which Autodesk’s 123D CATCH produces. Once they have been cleaned up (“fixed”) they are exported as STL files for 3D printing.
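The “water-tight” check these mesh tools perform is easy to illustrate: in a closed triangle mesh every edge is shared by exactly two faces, so any edge used by only one face lies on the rim of a hole. A minimal sketch, with a tetrahedron as invented example data:

```python
from collections import Counter

def boundary_edges(faces):
    """Return the edges used by only one triangle. A watertight ('fixed')
    mesh has none; each hole shows up as a loop of boundary edges."""
    counts = Counter()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return [e for e, n in counts.items() if n == 1]

# A tetrahedron (closed) versus the same shape with one face deleted.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(len(boundary_edges(tet)))        # 0 -> watertight
print(len(boundary_edges(tet[:-1])))   # 3 -> a triangular hole
```

Hole-filling in Netfabb and similar tools amounts to finding these boundary loops and stitching new triangles across them.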
Back at the office, the 21 pictures of the Christ’s Head taken at Lincoln Cathedral were uploaded to Autodesk’s 123D CATCH software (free to download here – http://www.123dapp.com/catch ), just as one might do at home or even over Wi-Fi from a phone (there’s an app for this, of course). The processing of the images once they were uploaded was straightforward and took about 4 minutes. The resulting OBJ file was pretty good, with what we’d describe as a medium-resolution mesh density. It had only a few extraneous bits of mesh to be cut away and, of course, a hole underneath where the object’s underside had been sitting on the table and so could not be photographed. We used Netfabb Pro to fix this OBJ file, cutting away the unwanted mesh of the table and so on, filling any holes and scaling the model to the size of the original. We then used a hollowing function in Netfabb Pro (not available in the free version) to void the inside of the object and give it a 4mm wall thickness, and cut an access aperture in the back of the model where it would not be seen, allowing the interior of the hollowed-out model to be inspected. Hollowing is not strictly necessary for a 3D print, but it greatly reduces the amount of material used and so reduces the cost. Normally we would print this ourselves, but we wanted to be true to the experiment and recognised that most people do not (yet) have a 3D printer at home, so we sent the finished model as an STL to our friends at John E Wright in Nottingham (http://www.johnewright.com/ ), who run a public 3D printing bureau service using MakerBot Replicator Z18s (http://www.makerbot.com/ ). Anyone can use this commercial service. A couple of days later the 3D printed model arrived at our office. The resolution of the model reflects the relatively quick capture method, the medium level of TIN resolution automatically applied by 123D CATCH, and the 0.5mm resolution of the MakerBot 3D printer.
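The material saving from hollowing is easy to estimate once you can compute a mesh’s volume, which for a closed triangle mesh like the fixed STL is a standard signed-tetrahedron sum. The sketch below uses an invented 50mm cube as a stand-in for the scanned head (a cube keeps the cavity arithmetic simple); Netfabb’s actual wall-offsetting algorithm is considerably more involved.

```python
import numpy as np

def mesh_volume(verts, faces):
    """Volume of a closed triangle mesh: sum the signed volumes of the
    tetrahedra formed by each outward-wound face and the origin."""
    v = np.asarray(verts, dtype=float)
    return sum(np.dot(v[a], np.cross(v[b], v[c])) / 6.0 for a, b, c in faces)

# A 50 mm cube standing in for the scanned head: 8 corners, 12 triangles
# wound so that every face normal points outwards.
verts = [(0, 0, 0), (50, 0, 0), (50, 50, 0), (0, 50, 0),
         (0, 0, 50), (50, 0, 50), (50, 50, 50), (0, 50, 50)]
faces = [(0, 2, 1), (0, 3, 2),   # bottom
         (4, 5, 6), (4, 6, 7),   # top
         (0, 1, 5), (0, 5, 4),   # front
         (2, 3, 7), (2, 7, 6),   # back
         (0, 4, 7), (0, 7, 3),   # left
         (1, 2, 6), (1, 6, 5)]   # right

solid = mesh_volume(verts, faces)      # 125,000 mm^3 for the solid cube
cavity = (50 - 2 * 4) ** 3             # interior left by a 4 mm wall
print(f"material saved by hollowing: {100 * cavity / solid:.0f}%")
```

Even on this simple shape a 4mm wall removes nearly 60% of the material, which is why hollowing makes such a difference to bureau printing costs.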
This is a cheap, quick model, made using widely and publicly available resources. Because of this we think the workflow we tried here makes a useful teaching demonstration of photogrammetric 3D capture and 3D printing of artefacts, and this is what we’re using it for. The resulting 3D print is interesting, and so are the questions it raises about what you would do with this capability, at this level of resolution, beyond instruction.