Field Test in Photogrammetry

This is a summary of my field test in photogrammetry. For the full project details, see mno613-field-test-white-paper.

Photo = light, gram = drawing, metry = measurement.

In a Nieman Lab article published in December 2016, photogrammetry is cited as one of several emerging technologies expected to transition in the new year from passive video experimentation to fully immersive experiences. Newsrooms across the country will be able to fully implement the new areas of photogrammetry, ambisonics and stereoscopic rendering. Given how easy it is—using old technology to make new—it makes sense that such emerging technology will become established technology at the turn of the new year. Photogrammetry has been around for some time, most recently for constructing maps and topographic landscapes; it's only with the use of three-dimensional technology that photogrammetry has earned a bigger place within the media landscape. The recent evacuations from the Syrian city of Aleppo could be told in a more immersive way and perhaps move a larger population of readers to a call for action.

We learned about many new and emerging types of technology in class, and while I wasn't necessarily ignorant of them, I had never delved into the technology until this class. Learning about virtual reality, augmented reality, 360 and 3D video, photogrammetry, sensors and drones was quite eye-opening for me, given my background in television sports and journalism. What most impressed me was the speed at which these technologies were becoming more common and easier to use.


My hypothesis for this project is that two of the emerging technologies we covered can be combined to demonstrate a more immersive storytelling experience. I chose to use photogrammetry to capture a museum exhibit and model it in 3D with annotations to tell a more immersive story of the subject. For this project I decided to cast a wide net and use a popular exhibit at the National Constitution Center in Philadelphia called Signer's Hall. This exhibit consists of 42 life-size statues of the founding fathers who signed the Constitution. I will use photogrammetry to capture this exhibit and make it accessible to more people regardless of where they live or their socio-economic level, and I will do so using equipment common to most people: a smartphone (an iPhone 6), a desktop computer, and free educational access to the Autodesk ReMake and Sketchfab software programs.


The statues that make up this particular exhibit are life-size bronzes, so I knew I would have to make some shading adjustments. Another realization was that most of the statues were my height, 5 feet 6 inches, and I had not brought or requested a stepladder to get shots from above them. I began taking test shots of a group of three statues to see how the overlap among the three would translate when I brought the photos into the Autodesk ReMake software, and then how difficult it would be to clean up the models in Sketchfab.

Charles Pinckney, Charles Cotesworth Pinckney and John Rutledge. Photo taken at The National Constitution Center, Philadelphia.

This became a little challenging: not only was I taking a lot of pictures, I also had to crawl around on the floor and contort myself around the limbs of these three statues, which were posed as if engaged in a debate. The key lesson I learned from several tutorials on the Autodesk YouTube channel is that for a successful, detailed model, pictures must not only be in focus and evenly lit; there must also be about 40 percent overlap between all the pictures for the point cloud to be accurate. Additionally, depending on how much detail you want to capture, photos should be taken five degrees apart as you shoot around the object, from above and below. This resulted in over 200 photographs for the first test run. Then, based on the arrangement of the statues within the exhibit, I decided to focus on two statues that stood alone, William Blount and our current celebrity, Alexander Hamilton. Since I had as much time as I needed in the exhibit, I also decided to tackle the Benjamin Franklin group: five statues surrounding a table at which Franklin was seated. This was the most challenging group of statues to photograph properly, so I concentrated on Franklin (seated) and Gouverneur Morris (leaning over Franklin), with the primary focus on Franklin.
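The arithmetic behind that photo count is easy to sketch. In a hypothetical helper (the function names and the assumption of three orbit heights are mine, not Autodesk's), 5-degree intervals add up quickly:

```cpp
#include <cassert>
#include <cmath>

// Shots needed for one full orbit of the subject at a given angular
// step (e.g. the 5-degree intervals recommended in the tutorials).
int shotsPerRing(double stepDegrees) {
    return static_cast<int>(std::ceil(360.0 / stepDegrees));
}

// Total shots if the orbit is repeated at several heights ("rings"),
// e.g. crouching, eye level and above the subject. Three rings is an
// illustrative assumption, not a ReMake requirement.
int totalShots(double stepDegrees, int rings) {
    return shotsPerRing(stepDegrees) * rings;
}
```

At 5-degree intervals, one orbit is 72 shots, and three rings put the total at 216, which lines up with the 200-plus photographs of the first test run.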

Example of the raw photos taken at varying distances and at 5-degree intervals. Photos taken at The National Constitution Center in Philadelphia.

Once the photos were transferred and uploaded to Autodesk ReMake, it was pretty easy to construct the 3D model and process the information. I credit the ease of transfer and construction to using the ReMake program rather than the 123D Catch app. Next, I saved the 3D model and imported it into Sketchfab, which was a challenge only because I needed to somehow get more space than my free educational account provided. After getting the necessary space to upload all my models, it took a couple of tutorials to figure out how to orient, light and shade them. I still have a lot to learn, but for the time period given for this project, the result turned out pretty well.

Screenshot of the initial upload of the Alexander Hamilton (center screen) photos. The exhibit room is partially reconstructed even though the photos were primarily of Hamilton.


To determine the feasibility of using 3D technology to tell stories, I constructed my virtual Alexander Hamilton, complete with annotations, and shared it on my Facebook page, asking anyone to share their impressions. I wanted my target audience to be a mix of people in the journalism industry and everyday people, so I identified a cross section of my Facebook friends: professional television journalists, cameramen, photographers, regular everyday people and a couple of librarians. The last group was chosen because of the historical nature of my project and the fact that librarians have been tuned in to the digital age since the debut of electronic readers. The overall reaction was how cool the technology was and that it was something that could be done with still pictures. Nearly all respondents felt immersed in learning about Alexander Hamilton, and they felt the annotations brought another level of immersiveness: not only could they see what each annotation was explaining, they could view it from different angles.


This technology is really effective for documenting and telling historical accounts. It's a much more immersive way to teach, which is why we see more and more virtual and 3D storytelling from the likes of National Geographic and Smithsonian, as evidenced in their digital magazines. For my purposes, this use of photogrammetry and 3D technology was effective. I think that with more time to develop my skills in cleaning my models and building a virtual scene for the subjects to live in, these two technologies would exceed my expectations. Being a video person, I would love to go into videogrammetry.


Improvements to communications infrastructure and Internet speeds would bring the use of photogrammetry to news organizations on a more mainstream level. With the capabilities of so many mobile devices and applications that enable technology such as photogrammetry, the question becomes how fast the processing power of these devices can become standard enough that anyone with a smartphone can construct a 3D scene, as I did with my iPhone 6, with minimal transfer or data issues.


Technology like photogrammetry and 3D modeling will definitely become the norm in storytelling for journalists. We have already crossed the threshold, with The New York Times and BBC News implementing story coverage in that format. As mentioned before, National Geographic, National Geographic Travel and Smithsonian have already become go-to sources for immersive storytelling via their digital magazines. The challenge becomes whether more news organizations become aware of the capabilities and availability of this kind of technology and, if they are, whether they can find storytellers able to use the software effectively. Beyond news, photogrammetry and 3D technologies will become tools for preserving historical artifacts such as the Seven Wonders of the World and for saving monuments and historical buildings from the hands of extremism.

As of 2018, software improvements combined with greater drone accessibility have brought photogrammetry front and center in agriculture, mining, construction and inspections. The most notable use is by The New York Times VR team covering the recent natural disasters, the volcanic eruptions in Guatemala and Hawaii. In the gaming world, high-quality scanned assets contributed to the first immersive first-person interactive story released by none other than Unity. With regard to historical preservation, we now see photogrammetry used to freeze a time capsule of culture by including street clutter such as fire hydrants, bollards and road signs.

The quality of photogrammetry software alone has improved enormously, which can only hint at what another two years will produce.


Summers, N. (2018, June 6). "Inventory" preserves street clutter with photogrammetry.

Palladino, T. (2018, June 21). New York Times AR coverage of Guatemala volcano disaster shows AR isn't ready for breaking news.

Walford, A. (2007). Photogrammetry: "What is photogrammetry?"

Soto, R. (2016, December 13). VR moves from experiments to immersion. Nieman Lab Predictions for Journalism 2017.

Caughill, P. (2016, December 22). This new drone is powerful enough to carry you and a friend. Futurism.

Krewson Wertz, P. (2016, September 19). Digital photography: The future of small-scale manufacturing?

Sketchfab. (2015, June 18). How to set up a successful photogrammetry project [tutorial].



News in the Age of New Media

The last several months have been quite an education. As a jaded, cynical member of the television media, I have come to accept the crazy new world of not only social media as a news source but also the newfangled technologies that allow social to be a news source. I attribute this acceptance to the graduate school program I am closing out (hallelujah) and the latest course in emerging technologies.

One of the many new technologies disrupting news gathering is 360 video and virtual reality. Both provide a more immersive experience of news stories and current events. 360 video in particular will have the bigger impact, as it is easily accessible to anyone with a smartphone and a Facebook account. This "surround" video allows the viewer to move around in a real 2D environment that can feel three-dimensional. Add some surround sound and you've got quite the presentation of an event. Imagine 360 coverage of a political rally with the accompanying sound. It would be real-time documentation without any bias other than what the viewer brings to the story.

Virtual reality (VR), on the other hand, is not just for video gamers anymore and is already becoming the hot new way to cover certain newsworthy events. Unlike 360 video, VR requires software to construct the virtual environment on top of the extra skill needed to capture the real-life subject matter. This would be an easy adaptation for large, long-established news organizations like The New York Times, but it would not be practical for smaller news outlets. Additionally, VR is a medium in which careful consideration should be given to which news events are rendered in virtual reality, given the possible effects of war or crime coverage on a viewer's mental or physical health. VR would have a wholly different effect when used for educational purposes, whether to help in the treatment of phobias like a fear of heights or to bring historical characters and events to life, as the Smithsonian is working toward.

For the future, 360 video is much better suited to news gathering and VR to a more controlled educational or game environment. After all, even with video games there is the understanding that the viewer or gamer is entering a false environment.


Road Closures by Drone

Unmanned aerial vehicles, or drones, have been making the news off and on for their use in journalism and in videography, especially in movies. Drones can capture a landscape from a wide variety of heights and are much more versatile than a jib or crane camera. Personally, I have always thought of drones in this respect: getting that money shot of a scene from low to high in the sky. The shots are always beautiful and often breathtaking. An area where I don't often think of drones being used is in conducting journalism.

I came across a perfect example where drone footage would enhance a story: the closure of a major Philadelphia road artery for much-needed repairs. The news article used a capture from Google Maps to show the length of road set to be closed. Drone video would instead show not only how busy this road is but also the level of repairs it has needed for some time. Actual video would also generate more views for the news story, since it would be a unique look at this stretch of road.

From a regulation standpoint, using a drone for journalism counts as commercial use, which requires passing the Part 107 exam administered by the Federal Aviation Administration. However, were one to use a drone recreationally to check out Lincoln Drive and see how the repairs are going, one would have to keep the drone within eyesight and below 400 feet.

The potential viewers that drone video would bring could be worth the extra effort in getting certified by the FAA to use drones for journalism and thus, commercial use.

Field Testing in Philadelphia

My big final project for one of my graduate classes is to conduct a field test using an emerging technology to tell a story. There is a lot of emerging technology out there, and for some of it I did not see an effective purpose in accurately telling a story, but that is why we go to school: to learn. I have since changed my mind about the value of virtual reality, 360 video, voice-activated artificial intelligence (Siri and Alexa), drones and even streaming video like Facebook Live.

I decided to conduct my field test using virtual reality to share the story of Philadelphia, specifically the National Constitution Center, where visitors can walk among the founders of our country. Philadelphia is chock-full of historical landmarks, museums and founding history, and some of it goes unnoticed because there are so many hidden gems. I chose Signer's Hall, where visitors can sign the Constitution along with the 42 founding fathers present at the original signing on September 17, 1787. Signer's Hall is one of the most popular exhibits at the National Constitution Center, and capturing it would not only serve to tell the story of each founding father but would also serve as an interactive way of promoting the Center across the country.

Signer’s Hall invites you to sign the Constitution alongside 42 life-size, bronze statues of the Founding Fathers.

Accomplishing this will be a challenge, and I fully expect several issues in scanning each statue and building my virtual environment, since I will be using free versions of Sketchfab, Unity and Autodesk. Another challenge will be planning the time it will take to conduct the scans and then build the VR components. All are challenges worth tackling to bring something historical to life.


Storytelling Sensors

A lot of complicated or abstract issues can be made into easy-to-understand stories through visualizations or by collecting data. One such issue is the ongoing drought in the southwestern and southeastern United States. To illustrate drought levels, the SparkFun Soil Moisture Sensor can be used to gather data on conditions in the southwest. The sensor would be plugged into an Arduino, which would light an LED when moisture is present. The SparkFun LED 8×7 array could then be used to display the amount of moisture on a scale of 1 to 5. With some coding, the level of moisture concentration shown on the array could make differences between drought areas visible at a glance.
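The core of the coding would be mapping the sensor's raw reading onto the 1-to-5 display scale. A minimal sketch in C++ (the function name and the even bucketing are my illustrative assumptions; on the Arduino the raw value would come from analogRead on its 10-bit ADC, and real thresholds would need calibrating against known wet and dry soil):

```cpp
#include <cassert>

// Map a raw soil-moisture reading (0-1023 from a 10-bit ADC, as
// analogRead returns on an Arduino) onto a 1-5 display level for
// the LED array. The even bucketing is an assumption; calibrated
// thresholds would replace it in a real deployment.
int moistureLevel(int rawReading) {
    if (rawReading < 0) rawReading = 0;        // clamp out-of-range values
    if (rawReading > 1023) rawReading = 1023;
    return 1 + (rawReading * 5) / 1024;        // buckets of ~205 counts each
}
```

A bone-dry reading of 0 maps to level 1 and a saturated reading of 1023 to level 5, so each lit level on the array corresponds to roughly a fifth of the sensor's range.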

Virtual Concessions

Now that this long presidential campaign, full of inappropriate comments, accusations and threats, is over, I started thinking about the swift about-face that both "establishment" party politicians took. The calm platitudes from the former reality TV star turned President-elect, and the tasteful call to unite from the former Secretary of State, got me thinking how different it must be for the journalists covering the candidates: they would see these two people "off the air" while traveling and while interacting with staff and voters.

What if 360 cameras were taken on the airplanes of the presidential candidates to show what goes on while in transit? Viewers could see how reporters cover a campaign and how candidates interact with those reporters. This could be the new way of getting to know a candidate running for office, not just the edited and prepared candidate that we get now.

An opportunity for a virtual reality story could come after the election, when the President-elect meets with the President to talk about the role. Imagine being virtually present in the Oval Office as the two men address the press and answer questions about how their meeting went. Another opportunity: virtual reality coverage of the political rallies each candidate held in every state during the campaign. What better way to show the true climate of a rally, see how many people were in attendance or convey what the energy felt like?

One thing is for sure, it may help show the true climate of an election and be a more accurate predictor than traditional polling or focus groups.

Reality Capture: The New Camera Phone

Reality capture technology has come quite a long way from what we know from movies like Avatar, Lord of the Rings and King Kong.

Avatar (2009)
King Kong (2005)

Nowadays there are 3D capture applications available for your smartphone that allow anyone to capture an object in 3D. There are even more apps available for download that will take that 3D file and animate it. These are amazing times when it comes to technology.

Often we cheer the innovation of such technology and how cutting-edge or beneficial it is for sharing information, telling stories or providing a unique experience. But what about the long-term ramifications?

Kit Harington from HBO's Game of Thrones is featured in the latest video game, Call of Duty: Infinite Warfare (released November 4, 2016).

When it comes to gaming, 3D and virtual reality are the name of the game. But what about everyday life? What about allowing anyone the ability to capture video for 3D? The question becomes one of privacy and the ownership of a person's likeness. Much like when cameras started appearing on cellphones and the issue of a person being photographed without their knowledge became an ethical discussion, easy access to 3D and virtual reality apps and software is raising the same concern again. What if someone mistakenly makes a 3D capture of another person publicly available? What happens to that person's reasonable expectation of privacy? What if that person is a celebrity? Who then controls their likeness, and is there any recourse for inappropriate or illegal use of it?

Not long ago (nine years), one of the television stations I worked at began using digital avatars of its on-air news anchors and meteorologists. Their digital selves were made to walk onto the corner of your computer screen or television set and tell you the weather forecast or notify you of breaking news. Most of the time, though, they were promoting the station's programming. This new digital presence didn't last long, because the on-air personalities had concerns about what their likenesses would be used for beyond what they agreed to, and let's not forget the basic issue of compensation. How do you compensate a person for their likeness? Royalty fees? What happens when those on-air personalities move on to other networks? How can they know that their digital selves have been deleted?

3D capture and virtual reality are definitely fun and creative outlets that can make a huge difference in medicine, education or even certain kinds of storytelling. However, unlike cameras on cellphones and the now-ubiquitous selfie, treating 3D and virtual capture with the same casualness would be detrimental and controversial.