K-Content News

Smart Image Search & Generation Service: Innovation in Location Scouting
November 28, 2018

Tourist attractions known for their amazing scenery often become famous solely by word of mouth. For many professional photographers and video producers, visiting these ‘attractions’ goes far beyond having fun; it is a matter of making a living. However, doing the legwork required to find these places time and time again is no easy task. The ‘Location Mapping-based Smart Image Contents Generation and Service Technology Development’ study, sponsored by the Korea Creative Content Agency and managed by the Electronics and Telecommunications Research Institute (ETRI), is a project that may provide a solution to this problem. We visited ETRI to meet with the project’s lead researcher, Dr. Jang Yoon-sep, and discuss the details of the project, which was completed in March.

A Project Begun for Video Producers

Dr. Jang began the interview by saying, “The purpose of our study was to resolve many of the problems frequently experienced in movie or drama production. The technology we have developed is able to resolve three major problems that occur on the site of a shoot, the first of which is finding the right shooting location. This is usually something that is done by professionals called ‘location hunters’, but it involves a substantial amount of time and money, and these location hunters can’t really share information with one another.”

To resolve this problem, the team sought to create a system that searches images based on where a photo or video was taken and in what orientation. According to Dr. Jang, all smartphones already save this location and orientation data with every photo and video, but the data is not being utilized properly. It occurred to the research team that analyzing and categorizing photos uploaded by the general public to social media based on location and orientation could greatly reduce the amount of location-scouting work done by professional image and video producers.
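As a rough illustration of the kind of metadata the system relies on, the sketch below (not the ETRI implementation; the library choice and field handling are assumptions) reads the GPS position and compass heading that smartphones typically embed in a photo's EXIF record, using the Pillow library.

```python
# A minimal sketch of extracting location and orientation metadata from a photo.
# Assumes a recent Pillow where GPS rationals convert cleanly with float().
from PIL import Image, ExifTags

def gps_and_heading(path):
    """Return (latitude, longitude, heading_degrees) from a photo's EXIF, or None."""
    exif = Image.open(path)._getexif() or {}          # classic Pillow accessor for JPEG EXIF
    tags = {ExifTags.TAGS.get(k, k): v for k, v in exif.items()}
    gps_raw = tags.get("GPSInfo")
    if not gps_raw:
        return None
    gps = {ExifTags.GPSTAGS.get(k, k): v for k, v in gps_raw.items()}

    def to_degrees(dms, ref):
        # EXIF stores coordinates as (degrees, minutes, seconds) rationals.
        d, m, s = (float(x) for x in dms)
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_degrees(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    # GPSImgDirection is the compass direction the camera was facing, if recorded.
    heading = float(gps["GPSImgDirection"]) if "GPSImgDirection" in gps else None
    return lat, lon, heading
```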

In order to begin collecting and analyzing data, the research team came up with a natural language-based semantic search method for images. Using the system, certain keywords or everyday sentences could be used to find a corresponding location. When a whole sentence is entered, the system extracts the words and phrases associated with location and returns appropriate results. Entering search conditions turns up a list of images that satisfy those conditions. Once the user selects an image of interest, the system shows where the image was taken, in what orientation, and from which angle.
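To make that flow concrete, here is a deliberately simplified, hypothetical version of the pipeline. The actual service performs semantic analysis of natural-language queries; this toy version merely matches location-related words in the query against scene tags attached to image records, then returns each hit with its location and orientation.

```python
# Illustrative only: query -> location terms -> matching image records.
# All records, tags, and coordinates below are made up for the example.
from dataclasses import dataclass

@dataclass
class ImageRecord:
    path: str
    lat: float
    lon: float
    heading: float        # compass direction the camera was facing, in degrees
    tags: set[str]        # scene descriptors, e.g. {"beach", "sunset"}

def search(query: str, index: list[ImageRecord]) -> list[ImageRecord]:
    """Return records whose tags overlap the location-related words in the query."""
    words = {w.strip(".,?!").lower() for w in query.split()}
    return [rec for rec in index if rec.tags & words]

index = [
    ImageRecord("han_river.jpg", 37.52, 126.93, 270.0, {"river", "bridge", "sunset"}),
    ImageRecord("bukhansan.jpg", 37.66, 126.98, 45.0, {"mountain", "granite", "ridge"}),
]
for rec in search("a sunset over a river bridge", index):
    print(rec.path, rec.lat, rec.lon, rec.heading)
```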

What sets this search function apart from similar conventional services? Dr. Jang explained, “There are other photo search services available for location hunting. However, these services just show a list of photos associated with the type of location you’re looking for, making it difficult to find out exactly where a photo was taken or what the surrounding area looks like. At the end of the day, this means that you need to go to the location yourself to find out more. Our technology, in conjunction with 3D map services like Daum’s Roadview, overcomes some of these limitations. We place a Roadview screen in the background and float certain photos in front of it, in the location and orientation in which they were taken. This allows users to check the actual location of the site and see what the surrounding area looks like.”
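One small geometric piece of such an overlay can be sketched as follows: given the street-view camera's coordinates and a photo's GPS coordinates, the initial compass bearing from the viewer to the photo tells you at what azimuth the floating photo should be anchored in the panorama. This is only an assumed building block; the article does not describe how the integration with Roadview is actually implemented, and the coordinates below are hypothetical.

```python
# Compute the azimuth at which a floating photo should appear in a panorama,
# given the viewer's position and the photo's GPS position.
import math

def bearing(viewer_lat, viewer_lon, photo_lat, photo_lon):
    """Initial great-circle bearing (degrees from north) from viewer to photo."""
    p1, p2 = math.radians(viewer_lat), math.radians(photo_lat)
    dlon = math.radians(photo_lon - viewer_lon)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Place the photo billboard at this azimuth, then rotate it to face the
# direction recorded in the photo's own EXIF heading.
print(round(bearing(37.5665, 126.9780, 37.5651, 126.9895), 1))
```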

Capitalizing on the Film-making Process as a Tourism Resource

Another topic the research team focused on was utilizing drama and movie film-making as a significant tourism resource. The filming sites of Korean dramas are major tourist attractions for Hallyu fans, yet in most cases nothing related to the filming remains on site. What if we were to create digital contents that preserve the vibrancy of the film-making process? Dr. Jang said that this is something that could be done quite easily. Tourists visiting a filming location could be handed smart devices such as tablet PCs so that they could enjoy AR contents. They could then walk around the area looking through the camera on their smart device, and the AR interface would guide them toward any nearby film-making contents. Once at the site of a shoot, they could point their camera in a certain direction, and an overlay of images and videos from the production could be shown on top of what the site looks like now.
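A hedged sketch of the guidance step described here, with entirely hypothetical names and thresholds: using the tablet's GPS fix, keep only the filming-site overlays within walking distance; pointing the camera at one of them (checked with the bearing formula from the earlier sketch against the device's compass heading) would then trigger the AR playback.

```python
# Hypothetical proximity filter for the AR guide; site names and the radius are made up.
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_overlays(device_lat, device_lon, overlays, radius_m=200):
    """Filming-site overlays closer than radius_m to the visitor's tablet."""
    return [name for name, lat, lon in overlays
            if distance_m(device_lat, device_lon, lat, lon) <= radius_m]

print(nearby_overlays(37.5665, 126.9780,
                      [("drama_scene_12", 37.5666, 126.9792),
                       ("drama_scene_07", 37.5800, 126.9900)]))
```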

Embedded Artistic Copyright Infringement Prevention Technology

The last piece of technology that Dr. Jang explained to us was slightly different from the previous two, but similar in that it was developed to address an important problem faced by video producers. It is an AI solution that analyzes copyright data for the artistic works included in an image, preventing possible copyright infringement. It is nearly impossible for video producers to accurately obtain copyright information for all of the drawings and sculptures they capture through their lenses. A scene from the American movie The Zero Theorem, directed by Terry Gilliam, had to be cut because the original artists of a mural visible in the background filed a lawsuit. This example illustrates the importance of having a system that intelligently recognizes artwork captured on film and analyzes the copyright information associated with it. Dr. Jang explained, “The technology analyzes artwork that appears in a video in real time. Even if the work is partially obstructed by an actor or actress and only part of it is showing, we are still able to identify it.”
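The article does not say how the recognition works internally, but one common way occlusion-robust matching is done is with local feature descriptors, which can still match a reference image when only part of a work is visible in the frame. The sketch below uses OpenCV's ORB features purely as an assumed stand-in for ETRI's actual solution; the file names and threshold are hypothetical.

```python
# An assumed illustration: match a video frame against a reference image of an
# artwork using ORB local features, which tolerate partial occlusion of the work.
import cv2

def count_matches(frame_path, artwork_path, max_hamming=40):
    """Number of good ORB descriptor matches between a frame and a reference artwork."""
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    artwork = cv2.imread(artwork_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=1000)
    _, frame_des = orb.detectAndCompute(frame, None)
    _, art_des = orb.detectAndCompute(artwork, None)
    if frame_des is None or art_des is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    good = [m for m in matcher.match(frame_des, art_des) if m.distance < max_hamming]
    return len(good)

# Hypothetical threshold: enough matches suggests the artwork is in the shot,
# so its copyright record should be looked up before the footage is used.
if count_matches("frame_0412.png", "mural_reference.jpg") > 25:
    print("possible copyrighted artwork in frame; check its licence")
```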

BANG, Seung-eon | Correspondent | earny00@gmail.com