Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session 5B: Demonstrations
Time: Thursday, 05/Jan/2017, 2:00pm - 4:00pm
Session Chair: Cathal Gurrin
Location: Sun (Rotunda)

Presentations

V-Head: Face Detection and Alignment for Facial Augmented Reality Applications

Zhiwei Wang, Xin Yang

Huazhong University of Science and Technology, People's Republic of China

Efficient and accurate face detection and alignment are key techniques for facial augmented reality (AR) applications. In this paper, we introduce V-Head, a facial AR system which consists of three major components: 1) joint face detection and shape initialization, which efficiently localizes facial regions based on the proposed face probability map and a multi-pose classifier while explicitly producing a roughly aligned initial shape; 2) cascade face alignment to locate 2D facial landmarks on the detected face; and 3) 3D head pose estimation based on the perspective-n-point (PnP) algorithm so as to overlay 3D virtual objects on the detected faces. The demonstration can be accessed from https://drive.google.com/open?id=0B-H2fYiPunUtRHBFTDRzRkZvVEE.
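
The head-pose step can be illustrated with a short sketch. The following is not the authors' implementation: it assumes six detected 2D landmarks, a generic 3D face model, OpenCV's solvePnP, and crude camera intrinsics, and returns the rotation and translation used to place a virtual object over the detected face.

    # Minimal sketch (not the authors' code): 3D head pose from 2D facial
    # landmarks via OpenCV's PnP solver. The 3D model points and camera
    # intrinsics below are generic approximations for illustration only.
    import numpy as np
    import cv2

    # Generic 3D face model points (arbitrary head-centred frame).
    MODEL_POINTS = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left eye outer corner
        (225.0, 170.0, -135.0),    # right eye outer corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ])

    def estimate_head_pose(landmarks_2d, image_size):
        """landmarks_2d: 6x2 array of detected landmarks matching MODEL_POINTS."""
        h, w = image_size
        focal = w  # crude approximation of the focal length in pixels
        camera_matrix = np.array([[focal, 0, w / 2],
                                  [0, focal, h / 2],
                                  [0, 0, 1]], dtype=np.float64)
        dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
        ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                      np.asarray(landmarks_2d, dtype=np.float64),
                                      camera_matrix, dist_coeffs,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation for placing the virtual object
        return rotation, tvec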


A demo for Image-based Personality Test

Huaiwen Zhang1,2, Jiaming Zhang3, Jitao Sang1, Changsheng Xu1

1Institute of Automation, Chinese Academy of Sciences; 2University of Chinese Academy of Sciences; 3Shandong University of Technology

In this demo, we showcase an image-based personality test. Compared with the traditional text-based personality test, the proposed test is more natural, objective, and language-insensitive. Each question consists of images depicting the same concept, and subjects are asked to choose their favorite image. Based on the choices made over typically 15 - 25 questions, we can accurately estimate the subjects' personality traits. The whole process takes less than 5 minutes. The online demo, available at http://www.visualbfi.org/, adapts well to both PCs and smartphones.


A web-based service for disturbing image detection

Markos Zampoglou1, Symeon Papadopoulos1, Yiannis Kompatsiaris1, Jochen Spangenberg2

1Centre for Research and Technology Hellas, Greece; 2Deutsche Welle, Berlin, Germany

As User Generated Content takes up an increasing share of the total Internet multimedia traffic, it becomes increasingly important to protect users (be they consumers or professionals, such as journalists) from potentially traumatizing content that is accessible on the Web. In this demonstration, we present a Web service that can identify disturbing or graphic content in images. The service can be used by platforms for filtering or to warn users prior to exposing them to such content. We evaluate the performance of the service and propose solutions towards extending the training dataset and thus further improving the performance of the service, while minimizing emotional distress to human annotators.
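
To make the intended usage concrete, a hypothetical client-side sketch is given below; the endpoint URL, request format, response field, and threshold are assumptions for illustration, not the service's published API.

    # Hypothetical sketch of how a platform might call such a service for
    # filtering; URL and response fields are placeholders, not the real API.
    import requests

    SERVICE_URL = "https://example.org/disturbing-image/classify"  # placeholder URL

    def is_disturbing(image_path: str, threshold: float = 0.5) -> bool:
        """Send an image for classification and apply a filtering threshold."""
        with open(image_path, "rb") as f:
            resp = requests.post(SERVICE_URL, files={"image": f}, timeout=10)
        resp.raise_for_status()
        score = resp.json()["disturbing_score"]  # assumed response field
        return score >= threshold

    # A platform could blur the image or show a warning overlay whenever
    # is_disturbing(path) returns True.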


DeepStyleCam: A Real-time Style Transfer App on iOS

Ryosuke Tanno, Shin Matsuo, Wataru Shimoda, Keiji Yanai

The University of Electro-Communications, Tokyo, Japan

In this demo, we present a very fast CNN-based style transfer system running on standard iPhones. The proposed app can transfer multiple pre-trained styles to the video stream captured from the iPhone's built-in camera in around 140 ms per frame (7 fps). We extended the real-time style transfer network proposed by Johnson et al. [john16] so that a single network can learn multiple styles at the same time. In addition, we modified the CNN so that the amount of computation is reduced to one tenth of the original network. The very fast mobile implementation of the app is based on our paper [yana16], which describes several new ideas for implementing CNNs efficiently on mobile devices.
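
As a rough illustration of the multi-style idea (not the DeepStyleCam architecture itself), the sketch below conditions a small Johnson-style feed-forward network on a one-hot style selector concatenated to the input, so one network serves several styles; the layer layout, channel widths, and conditioning scheme are assumptions.

    # Illustrative PyTorch sketch of a multi-style feed-forward transform net;
    # the conditioning scheme and tiny channel widths are assumptions, not the
    # authors' network. Even frame height/width assumed for the down/upsample pair.
    import torch
    import torch.nn as nn

    class MultiStyleTransformNet(nn.Module):
        def __init__(self, num_styles: int, width: int = 8):
            super().__init__()
            # The chosen style is fed in as extra one-hot input planes, so a
            # single network can be trained on several styles at once.
            self.num_styles = num_styles
            self.body = nn.Sequential(
                nn.Conv2d(3 + num_styles, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(),           # downsample
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(width, width, 4, stride=2, padding=1), nn.ReLU(),  # upsample
                nn.Conv2d(width, 3, 3, padding=1),
            )

        def forward(self, frame: torch.Tensor, style_id: int) -> torch.Tensor:
            b, _, h, w = frame.shape
            onehot = torch.zeros(b, self.num_styles, h, w, device=frame.device)
            onehot[:, style_id] = 1.0
            return torch.sigmoid(self.body(torch.cat([frame, onehot], dim=1)))

    # Usage: stylized = MultiStyleTransformNet(num_styles=10)(camera_frame, style_id=3)
    # Keeping channel widths small is one way to hold per-frame computation low
    # enough for mobile use.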


An Annotation System for Egocentric Image Media

Aaron Duane, Jiang Zhou, Suzanne Little, Cathal Gurrin, Alan F. Smeaton

Insight Centre for Data Analytics, Dublin City University, Ireland

Manual annotation of egocentric visual media for lifelogging, activity monitoring, object counting, etc. is challenging due to the repetitive nature of the images, especially for events such as driving, eating, meeting, or watching television, where there is no change in scenery. This makes the annotation task boring, and there is a danger of missing things through loss of concentration. This is particularly problematic when labelling infrequently or irregularly occurring objects or short activities. To date, annotation approaches have structured visual lifelogs into events and then annotated at the event or sub-event level, but this can be limiting when the annotation task covers a wider variety of topics: events, activities, interactions and/or objects. Here we build on our prior experience of annotating at the event level and present a new annotation interface. This demonstration will show a software platform for annotating different levels of labels by different projects, with different aims, for egocentric visual media.
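
As a purely illustrative sketch (not the platform's actual schema), the records below show one way multi-level labels from different projects could be stored and looked up per image; all field and label names are assumptions.

    # Hypothetical data model for multi-level egocentric annotations; field
    # names, label types, and the lookup logic are assumptions for illustration.
    from dataclasses import dataclass
    from typing import List, Optional

    LABEL_TYPES = {"event", "activity", "interaction", "object"}

    @dataclass
    class Annotation:
        project: str                      # each project can use its own label vocabulary
        label_type: str                   # one of LABEL_TYPES
        label: str                        # e.g. "driving", "eating", "television"
        start_image: str                  # first image the label applies to
        end_image: Optional[str] = None   # None for a single-image (e.g. object) label
        annotator: str = ""

    def annotations_for_image(annotations: List[Annotation], image_id: str,
                              ordered_ids: List[str]) -> List[Annotation]:
        """Return all labels whose image span covers image_id."""
        idx = ordered_ids.index(image_id)
        hits = []
        for a in annotations:
            lo = ordered_ids.index(a.start_image)
            hi = ordered_ids.index(a.end_image) if a.end_image else lo
            if lo <= idx <= hi:
                hits.append(a)
        return hits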



 
Conference: MMM2017