K-Content News

The Birth of the Autonomous Vehicle Content Market, Starting with Interface Development
November 28, 2018
Interfaces: The First Step Toward Creating a Next-Generation Content Ecosystem

Experts forecast that autonomous vehicles (AVs) will become a powerful content consumption platform for the next generation. As humans become less involved in the driving process and automation takes over, in-vehicle content consumption will increase dramatically. However, even if AV technology is perfected, the AV content market will have a difficult time taking root without a proper interface. The Korea Creative Content Agency is supporting ‘the study of infotainment contents and the interaction development of next-generation mobile spaces’, research aimed at developing interfaces for consuming content in AVs and exploring different interface possibilities. Participating in the study are the Korea Electronics Technology Institute, Reakosys Inc., NMedia, and the University Industry Foundation at Yonsei University. We met with Senior Research Fellow AHN, Yang-Geun, who is heading the study, and discussed the content that will be available on the roads of the future.

A Hands-on Experience with the AV Content Interface

We used a simulator at the lab to try out the AV interface, which is still in development. The demonstration scenarios, divided into six areas (advertisements, tourism, games, music, education, and healing), gave us insight into the directions AV content will take.

The first scenario, ‘advertisements’, began with the system using cameras to identify the user's gender so that it could provide tailored content. Ahn explained, “Gathering information about the user allows the system to provide content that the user is more likely to enjoy. Knowing a user’s gender and age, for example, lets us recommend things the user might like.”
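
The basic idea Ahn describes can be illustrated with a minimal sketch. Everything here is hypothetical (the profile fields, the catalog, and the segment keys are invented for illustration; the article does not describe the team's actual recommendation system):

```python
from dataclasses import dataclass

# Hypothetical rider profile inferred from the cabin camera.
# Gender and age band are the two cues Ahn mentions in the interview.
@dataclass
class RiderProfile:
    gender: str    # e.g. "female", "male"
    age_band: str  # e.g. "20s", "30s"

# Illustrative catalog mapping demographic segments to ad pools.
AD_POOLS = {
    ("female", "20s"): ["fashion_brand_a", "cosmetics_brand_b"],
    ("male", "30s"): ["outdoor_brand_c", "watch_brand_d"],
}

def recommend_ads(profile: RiderProfile) -> list[str]:
    """Return candidate ads for the inferred segment, with a generic fallback."""
    return AD_POOLS.get((profile.gender, profile.age_band), ["generic_brand"])

print(recommend_ads(RiderProfile("female", "20s")))
```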

Once the system determined my gender, the screen showed the logos of two different fashion brands and told me to look at the one that appealed to me more. I gazed at one of the logos for three to four seconds, and an advertisement from that brand automatically played. The system uses ‘gaze’ as a means of control in order to maximize user convenience. In the ‘tourism’ scenario, information or videos associated with a tourist attraction played on the screen as the vehicle passed by the attraction. An interesting characteristic of the ‘games’ content was that the AR game used the actual road as its background. The demonstration in this category was rudimentary, built around the idea of riding a motorcycle to collect coins scattered across the road, but it hinted at endless possibilities.
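
The "look at it for three to four seconds" behavior is essentially a dwell-time trigger. A minimal sketch of that logic follows; the threshold value and the gaze-sample interface are assumptions for illustration, since the article does not describe the team's actual eye-tracking pipeline:

```python
DWELL_THRESHOLD_S = 3.0  # the article mentions a three-to-four-second gaze

class DwellSelector:
    """Fires once when the gaze stays on the same target past a threshold."""

    def __init__(self, threshold_s: float = DWELL_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.current_target = None
        self.gaze_start = 0.0
        self.fired = False

    def update(self, target: str | None, now: float) -> str | None:
        # Restart the timer whenever the gaze moves to a different target.
        if target != self.current_target:
            self.current_target = target
            self.gaze_start = now
            self.fired = False
            return None
        # Fire exactly once after the dwell threshold is exceeded.
        if target is not None and not self.fired and now - self.gaze_start >= self.threshold_s:
            self.fired = True
            return target
        return None

# Example: simulated gaze samples at 1 Hz, lingering on one brand logo.
selector = DwellSelector()
for t, gaze in enumerate(["brand_a", "brand_a", "brand_a", "brand_a"]):
    if (selected := selector.update(gaze, float(t))) is not None:
        print(f"play advertisement for {selected}")
```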

Instead of tracking the user’s gaze, the game was controlled through ‘air touch’, in which the system recognizes the rider’s body movements. The system recognized my movements accurately and quickly enough for me to enjoy the game. In the ‘healing’ scenario, the system analyzed my facial expression to determine my stress level and then played images to help me calm down and relax. The ‘education’ scenario showed life-size dinosaurs roaming the streets. Air touch gestures could be used to observe the 3D-modeled dinosaurs more closely, allowing the user to zoom in on and rotate the dinosaurs in midair. In the ‘music’ scenario, I tried dancing to the video shown on the screen, and the gesture recognition function measured how well I moved to the music.
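
The ‘healing’ flow, stripped to its core, is a mapping from an estimated stress score to a content choice. The sketch below is purely illustrative: the feature names and weights are invented, and score_stress stands in for whatever facial-expression model the team actually uses, which the article does not detail:

```python
# Toy stand-in for a facial-expression stress estimator, returning [0, 1].
def score_stress(facial_features: dict[str, float]) -> float:
    # Assumed features and weights, for illustration only.
    return min(1.0, 0.6 * facial_features.get("brow_furrow", 0.0)
                    + 0.4 * facial_features.get("jaw_tension", 0.0))

def pick_healing_content(stress: float) -> str:
    """Choose calming content proportional to the estimated stress level."""
    if stress > 0.7:
        return "slow_breathing_visuals"
    if stress > 0.4:
        return "calming_nature_scenes"
    return "ambient_scenery"

features = {"brow_furrow": 0.9, "jaw_tension": 0.8}
print(pick_healing_content(score_stress(features)))  # -> slow_breathing_visuals
```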

Final Objectives and Forecast for the Study

The experience was quite intriguing and enough to fill me with anticipation. The demonstrations made it feel like interfaces we have only seen in science fiction movies will soon become a reality.

Ahn went on to explain the research team's future plans. According to Ahn, one of the team’s objectives for next year is to go beyond AR and realize MR (mixed reality). Fundamentally, MR is similar to AR in that digital elements are overlaid on real-world elements. What sets the two apart is that whereas AR content can clearly be distinguished from the real world, MR is, at least in theory, supposed to be indistinguishable from it. Ahn explained that successfully developing MR will allow the team to place virtual signs or screens on the walls of real-world buildings.

Another of the team's objectives is to adopt AI (artificial intelligence). “We are trying to figure out how to go beyond basic voice command recognition in content consumption. For example, we are looking into ways of having the AI recognize full sentences in which a user describes the type of content he or she wants, and then provide the appropriate content,” Ahn said.
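
As a rough sketch of what "recognizing a full sentence" could mean in practice, the toy function below maps a free-form request to one of the demo's six content areas. The keyword table is a deliberate simplification; the system Ahn describes would presumably rely on a trained language model rather than keyword rules:

```python
# Hypothetical keyword-to-genre table; the genres are the six demo areas.
GENRE_KEYWORDS = {
    "relaxing": "healing",
    "game": "games",
    "dinosaur": "education",
    "song": "music",
    "sightseeing": "tourism",
}

def parse_content_request(utterance: str) -> str:
    """Map a free-form spoken request to one of the demo's content areas."""
    lowered = utterance.lower()
    for keyword, genre in GENRE_KEYWORDS.items():
        if keyword in lowered:
            return genre
    return "advertisements"  # arbitrary fallback for the sketch

print(parse_content_request("Play me something relaxing for the ride home"))
# -> "healing"
```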

BANG, Seung-eon | Correspondent | earny00@gmail.com