Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session 6B: SS3: Multimedia Computing for Intelligent Life
Friday, 06/Jan/2017:
9:00am - 10:20am

Session Chair: Wen-Huang Cheng
Location: V102
1st floor, 2nd room on left.

Session Abstract

9:00am - 9:20am

Deep Learning based Intelligent Basketball Arena with Energy Image

Wu Liu1, Jiangyu Liu2, Xiaoyan Gu3, Kun Liu1, Xiaowei Dai2, Huadong Ma1

1Beijing University of Posts and Telecommunications; 2Zepp Labs, Inc.; 3Institute of Information Engineering, Chinese Academy of Sciences

With the development of computer vision and artificial intelligence technologies, the “Intelligent Arena” is becoming one of the newly emerging applications and research topics. Different from conventional sports video highlight detection, the intelligent playground can supply real-time and automatic sports video broadcast, highlight video generation, and sports-technique analysis. In this paper, we propose a deep learning based intelligent basketball arena system to automatically broadcast basketball matches. First of all, with multiple cameras around the playground, the proposed system can automatically select the best camera to supply a real-time, high-quality broadcast. Furthermore, with the basketball energy image (BEI) and a deep convolutional neural network (CNN), we can accurately capture the scoring clips as highlight video clips to supply replays of the best actions and online sharing. Finally, evaluations on a real-world basketball match dataset demonstrate that the proposed system can obtain 94.59% accuracy with only 45ms processing time (i.e., 10ms live camera selection, 30ms hotspot area detection, and 5ms BEI+CNN) for each frame. Owing to this outstanding performance, the proposed system has already been integrated into commercial intelligent basketball arena applications.

9:20am - 9:40am

Human Pose Tracking using Online Latent Structured Support Vector Machine

Kai-Lung Hua2, Irawati Nurmala Sari2, Mei-Chen Yeh1

1National Taiwan Normal University; 2National Taiwan University of Science and Technology

Tracking human poses in a video is a challenging problem and has numerous applications. The task is particularly difficult in realistic scenes because of several intrinsic and extrinsic factors, including complicated and fast movements, occlusions and lighting changes. We propose an online learning approach for tracking human poses using latent structured Support Vector Machine (SVM). The first frame in a video is used for training, in which body parts are initialized by users and tracking models are learned using latent structured SVM. The models are updated for each subsequent frame in the video sequence. To solve the occlusion problem, we formulate a Prize-Collecting Steiner tree (PCST) problem and use a branch-and-cut algorithm to refine the detection of body parts. Experiments using several challenging videos demonstrate that the proposed method outperforms two state-of-the-art methods.

9:40am - 10:00am

A Sensor-based Official Basketball Referee Signals Recognition System Using Deep Belief Networks

Chung-Wei Yeh, Tse-Yu Pan, Min-Chun Hu

National Cheng Kung University, Taiwan, Republic of China

In a basketball game, the referees, who have the responsibility to enforce the rules and maintain the order of the game, have only a brief moment to determine whether an infraction has occurred; they then communicate with the scoring table using hand signals. In this paper, we propose a novel system which can not only recognize the basketball referees’ signals but also communicate with the scoring table in real time. A deep belief network and time-domain features are utilized to analyze two heterogeneous signals, surface electromyography (sEMG) and three-axis accelerometer (ACC) data, to recognize dynamic gestures. Our recognition method is evaluated on a dataset of 9 official hand signals performed by 11 subjects. Our recognition model achieves an acceptable accuracy rate of 97.9% and 90.5% in 5-fold Cross Validation (5-foldCV) and Leave-One-Participant-Out Cross Validation (LOPOCV) experiments, respectively. The accuracy of the LOPOCV experiment can be further improved to 94.3% by applying user calibration.

10:00am - 10:20am

egoPortray: Visual Exploration of the Mobile Communication Signature from Egocentric Network Perspective

Qing Wang1, Jiansu Pu2, Yuanfang Guo3, Zheng Hu1, Hui Tian1

1State Key Laboratory of Networking and Switching Technology, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China; 2CompleX Lab, Web Sciences Center, Big Data Research Center, University of Electronic Science and Technology of China, Chengdu 611731, China; 3State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China

The coming big data era calls for new methodologies to process and analyze huge volumes of data. Visual analytics is becoming increasingly crucial in data analysis, presentation, and exploration. Communication data is significant for studying human interactions and social relationships. In this paper, we propose a visual analytics system named egoPortray to interactively analyze communication data based on a directed weighted ego network model. An ego network (EN) is composed of a centered individual (ego), its direct contacts (alters), and the interactions among them. Based on the EN model, egoPortray presents an overall statistical view to grasp the distributions and correlations of all EN features, and a glyph-based group view to illustrate the key EN features for comparing different egos. The proposed system and the idea of ego networks can be generalized and applied in other fields where network structure exists.

Contact and Legal Notice · Contact Address:
Conference: MMM2017
Conference Software - ConfTool Pro 2.6.107+TC
© 2001 - 2017 by H. Weinreich, Hamburg, Germany