Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
VBS: Video Browser Showdown
Wednesday, 04/Jan/2017:
3:30pm - 7:00pm

Session Chair: Werner Bailer
Location: M201
Second floor, opposite conference halls.


Video Hunter at VBS 2017

Adam Blažek, Jakub Lokoč, David Kuboň

Charles University, Czech Republic

After almost three years of development, the Video Hunter tool (formerly the Signature-Based Video Browser) has become a complex tool combining different query modalities, multi-sketches, visualizations and browsing techniques. In this paper, we present additional improvements of the tool focusing on keyword search. More specifically, we present a method relying on an external image search engine and a method relying on ImageNet labels. We also present a keyframe caching method employed by our tool.
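The keyword search via ImageNet labels described above can be sketched as an inverted index from labels to keyframes; a minimal illustration, assuming hypothetical keyframe identifiers, label names, and detector scores (not taken from the actual Video Hunter implementation):

```python
from collections import defaultdict

# Hypothetical sketch of label-based keyword search over keyframes.
# Keyframe ids, labels, and scores below are invented for illustration.

def build_label_index(keyframe_labels):
    """Build an inverted index mapping each label to (keyframe, score) pairs."""
    index = defaultdict(list)
    for keyframe, labels in keyframe_labels.items():
        for label, score in labels.items():
            index[label].append((keyframe, score))
    return index

def keyword_search(index, query_labels):
    """Rank keyframes by the summed detector scores of the queried labels."""
    scores = defaultdict(float)
    for label in query_labels:
        for keyframe, score in index.get(label, []):
            scores[keyframe] += score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

keyframes = {
    "kf_001": {"dog": 0.9, "grass": 0.4},
    "kf_002": {"cat": 0.8},
    "kf_003": {"dog": 0.3, "car": 0.7},
}
index = build_label_index(keyframes)
results = keyword_search(index, ["dog"])  # keyframes containing "dog", best first
```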


VERGE in VBS 2017
Anastasia Moumtzidou1, Theodoros Mironidis1, Fotini Markatopoulou1,2, Stelios Andreadis1, Ilias Gialampoukidis1, Damianos Galanopoulos1, Anastasia Ioannidou1, Stefanos Vrochidis1, Vasileios Mezaris1, Ioannis Kompatsiaris1, Ioannis Patras2

1ITI-CERTH, Greece; 2School of Electronic Engineering and Computer Science, QMUL, UK

This paper presents the VERGE interactive video retrieval engine, which is capable of browsing and searching video content. The system integrates several content-based analysis and retrieval modules, including concept detection, clustering, visual similarity search, object-based search, query analysis, and multimodal and temporal fusion.

Storyboard-based Video Browsing Using Color and Concept Indices

Wolfgang Hürst1, Algernon Ip Vai Ching1, Klaus Schoeffmann2, Manfred J. Primus2

1Utrecht University, The Netherlands; 2Klagenfurt University, Austria

We present an interface for interactive video browsing where users visually skim storyboard representations of the files in search of known items (known-item search tasks) and textually described subjects, objects, or events (ad-hoc search tasks). Individual segments of the video are represented as a color-sorted storyboard that can be addressed via a color index. Our storyboard representation is optimized for quick visual inspection, drawing on results from our ongoing research. In addition, a concept-based search is used to filter the storyboard down to the parts containing the related concept(s), thus complementing human visual inspection with semantic, content-based annotation.
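The color-sorted storyboard idea can be illustrated by ordering segments by the hue of their dominant color, so that similarly colored keyframes sit next to each other and can be located via a color index. A minimal sketch, assuming placeholder segment ids and invented dominant-RGB values (the actual system's color model is not specified here):

```python
import colorsys

# Illustrative sketch: order video segments by the hue of their dominant
# color so the storyboard can be browsed by color. Data is made up.

def hue_of(rgb):
    """Return the HSV hue (0..1) of an (R, G, B) triple in 0..255."""
    r, g, b = (c / 255.0 for c in rgb)
    h, _, _ = colorsys.rgb_to_hsv(r, g, b)
    return h

def color_sorted_storyboard(segments):
    """segments: list of (segment_id, dominant_rgb); return ids sorted by hue."""
    return [seg_id for seg_id, rgb in sorted(segments, key=lambda s: hue_of(s[1]))]

segments = [
    ("seg_a", (10, 10, 200)),   # predominantly blue
    ("seg_b", (200, 10, 10)),   # predominantly red
    ("seg_c", (10, 200, 10)),   # predominantly green
]
order = color_sorted_storyboard(segments)  # red, then green, then blue by hue
```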

Enhanced Retrieval and Browsing in the IMOTION System

Luca Rossetto1, Ivan Giangreco1, Claudiu Tanase1, Heiko Schuldt1, Stéphane Dupont2, Omar Seddati2

1University of Basel, Switzerland; 2Université de Mons, Belgium

This paper presents the third version of the IMOTION system. While still focusing on sketch-based retrieval, we improve upon the semantic retrieval capabilities introduced in the previous version by adding more detectors and improving the interface for semantic query specification. Compared to last year's system, we increase the role of features obtained from Deep Neural Networks in three areas: semantic class labels for more entry-level concepts, hidden-layer activation vectors for query-by-example, and a 2D display of semantic similarity results. A new graph-based result navigation interface further enriches the system's browsing capabilities. The updated database storage system ADAMpro, designed from the ground up for large-scale multimedia applications, ensures scalability to steadily growing collections.
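Query-by-example over hidden-layer activation vectors, as mentioned in the abstract, typically amounts to nearest-neighbor search under a similarity measure such as cosine similarity. A minimal sketch with toy vectors standing in for DNN activations (the vectors, shot ids, and the choice of cosine similarity are assumptions for illustration):

```python
import math

# Sketch of query-by-example: rank database shots by cosine similarity
# of their activation vectors to the query's vector. Toy data only.

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def query_by_example(query_vec, database):
    """database: {shot_id: activation_vector}; return ids ranked by similarity."""
    return sorted(database, key=lambda sid: cosine(query_vec, database[sid]),
                  reverse=True)

db = {
    "shot_1": [1.0, 0.0, 0.2],
    "shot_2": [0.0, 1.0, 0.0],
    "shot_3": [0.9, 0.1, 0.3],
}
ranking = query_by_example([1.0, 0.0, 0.25], db)  # most similar shot first
```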

Concept-Based Interactive Search System

Yi-Jie Lu, Phuong Anh Nguyen, Hao Zhang, Chong-Wah Ngo

City University of Hong Kong, Hong Kong S.A.R. (China)

Our successful multimedia event detection system at TRECVID 2015 showed its strength in handling complex concepts in a query. The system was based on a large number of pre-trained concept detectors for textual-to-visual relations. In this paper, we enhance the system by bringing a human into the loop. To help a user quickly satisfy an information need, we incorporate concept screening, video reranking by highlighted concepts, relevance feedback, and color sketches to refine a coarse retrieval result. The aim is to arrive at a system suitable for both Ad-hoc Video Search and Known-Item Search. In addition, given the increasing difficulty of distinguishing shots of very similar scenes, we also explore automatic story annotation along the timeline of a video, so that a user can quickly grasp the story unfolding around a target shot and reject shots with incorrect context. With the story annotation, a user can also refine the search result by simply adding a few keywords to a special "context field" of the query.
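The "video reranking by highlighted concepts" step can be sketched as boosting each shot's base retrieval score by the detector scores of the concepts the user highlighted. A minimal illustration with invented shot ids, concepts, and scores (the boost weight and scoring formula are assumptions, not the paper's actual method):

```python
# Sketch of human-in-the-loop reranking: user-highlighted concepts boost
# the base scores of a coarse retrieval result. All data is illustrative.

def rerank(shots, highlighted, boost=1.0):
    """shots: list of (shot_id, base_score, {concept: detector_score}).
    Return shot ids ordered by base score plus boosted concept scores."""
    def adjusted(shot):
        _, base, concepts = shot
        return base + boost * sum(concepts.get(c, 0.0) for c in highlighted)
    return [sid for sid, _, _ in sorted(shots, key=adjusted, reverse=True)]

shots = [
    ("s1", 0.5, {"beach": 0.1, "crowd": 0.8}),
    ("s2", 0.4, {"beach": 0.9}),
    ("s3", 0.6, {}),
]
order = rerank(shots, {"beach"})  # s2 rises to the top after the boost
```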

Semantic Extraction and Object Proposal for Video Search

Vinh-Tiep Nguyen1, Thanh Duc Ngo2, Duy-Dinh Le2, Minh-Triet Tran1, Duc Anh Duong2, Shin'ichi Satoh3

1University of Science, VNU-HCM, Vietnam; 2University of Information Technology, VNU-HCM, Vietnam; 3National Institute of Informatics, Tokyo, Japan

In this paper, we propose two approaches to the problems of video search: ad-hoc video search and known-item search. First, we propose combining multiple semantic concepts extracted from multiple networks trained on many data domains. Second, to help users find exactly the video shot that was shown before, we propose a sketch-based search system that detects and indexes the many objects produced by an object proposal algorithm. In this way, we leverage not only the concepts but also the spatial relations between them.
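Combining concept scores from multiple networks trained on different domains is a form of late fusion; one simple variant averages the scores each network assigns to a concept. A minimal sketch, with invented network names and scores (the paper's actual fusion scheme is not specified here):

```python
# Illustrative late-fusion sketch: average per-concept scores coming
# from several networks. Network names and scores are made up.

def fuse_concept_scores(per_network_scores):
    """per_network_scores: {network: {concept: score}}.
    Return {concept: mean score over the networks that report it}."""
    totals, counts = {}, {}
    for scores in per_network_scores.values():
        for concept, score in scores.items():
            totals[concept] = totals.get(concept, 0.0) + score
            counts[concept] = counts.get(concept, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

fused = fuse_concept_scores({
    "net_places": {"beach": 0.8, "street": 0.2},
    "net_imagenet": {"beach": 0.6, "dog": 0.9},
})
# "beach" is averaged over both networks; the rest keep their single score
```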

Collaborative Feature Maps for Interactive Video Search

Klaus Schoeffmann1, Manfred Juergen Primus1, Bernd Muenzer1, Stefan Petscharnig1, Christof Karisch1, Qing Xu2, Wolfgang Hürst3

1Klagenfurt University, Austria; 2Tianjin University, China; 3Utrecht University, The Netherlands

This extended demo paper summarizes our interface for the Video Browser Showdown (VBS) 2017 competition, where visual and textual known-item search (KIS) tasks, as well as ad-hoc video search (AVS) tasks, must be solved interactively in a 600-hour video archive. To this end, we propose a very flexible distributed video search system that combines many ideas from related work in a novel and collaborative way, such that several users can work together and explore the video archive in a complementary manner.

The main interface is a perspective Feature Map, which shows keyframes of shots arranged according to a selected content similarity feature (e.g., color, motion, or semantic concepts). This Feature Map is accompanied by additional views that allow users to search and filter according to a particular content feature. To support collaboration among several users, we provide a cooperative heatmap that shows a synchronized view of the inspection actions of all users. Moreover, we use collaborative re-ranking of shots (in specific views) based on the results retrieved by other users.
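The cooperative heatmap described above can be sketched as merging each user's inspection log into per-shot counts, so every client can render which parts of the archive have already been examined. A minimal illustration; the event format and identifiers are assumptions, not the system's actual protocol:

```python
from collections import Counter

# Sketch of a cooperative heatmap: merge inspection actions from all
# users into per-shot counts. User names and shot ids are invented.

def build_heatmap(inspection_logs):
    """inspection_logs: {user: [shot_id, ...]}.
    Return a Counter of how often each shot was inspected overall."""
    heat = Counter()
    for shots in inspection_logs.values():
        heat.update(shots)
    return heat

heat = build_heatmap({
    "user_a": ["s1", "s2", "s1"],
    "user_b": ["s2", "s3"],
})
# shots never inspected simply have a count of zero
```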

Conference: MMM2017
Conference Software - ConfTool Pro 2.6.107+TC
© 2001 - 2017 by H. Weinreich, Hamburg, Germany