Overview and details of the sessions of this conference.
Session 2B: SS1: Social Media Retrieval and Recommendation
For more details of this session, please visit: http://mmm2017.ru.is/index.php/special-session-1-social-media-retrieval-and-recommendation/.
1:30pm - 1:50pm
LingoSent — A Platform for Linguistic Aware Sentiment Analysis for Social Media Messages
Tianjin University, People's Republic of China
Sentiment analysis is an important natural language processing (NLP) task and is applied in a wide range of scenarios. Social media messages such as tweets often differ from formal writing, exhibiting unorthodox capitalization, expressive lengthening, Internet slang, etc. While such characteristics are inherently beneficial for the task of sentiment analysis, they also pose new challenges for existing NLP platforms. In this article, we present a new approach that improves lexicon-based sentiment analysis by extracting and utilizing linguistic features in a comprehensive manner. In contrast to existing solutions, we design our sentiment analysis approach as a framework in which data preprocessing, linguistic feature extraction, and sentiment calculation are separate components. This allows for easy modification and extension of each component. More importantly, we can easily configure the sentiment calculation with respect to the extracted features to optimize sentiment analysis for different application contexts. In a comprehensive evaluation, we show that our system outperforms existing state-of-the-art lexicon-based sentiment analysis solutions.
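The separation of preprocessing, linguistic feature extraction, and sentiment calculation described in the abstract can be illustrated with a minimal lexicon-based sketch. The mini-lexicon, boost weights, and normalization rules below are illustrative assumptions, not LingoSent's actual components:

```python
import re

# Hypothetical mini-lexicon; a real system would load a full sentiment resource.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0, "happy": 1.5}

def extract_features(token):
    """Return (normalized token, intensity multiplier) from surface cues."""
    boost = 1.0
    if token.isupper() and len(token) > 1:      # unorthodox capitalization
        boost *= 1.5
    if re.search(r"(.)\1{2,}", token):          # expressive lengthening, e.g. "goooood"
        boost *= 1.5
    # shorten runs of 3+ repeated letters to a double letter
    normalized = re.sub(r"(.)\1{2,}", r"\1\1", token.lower())
    # fall back to a single letter if the double form is not in the lexicon
    if normalized not in LEXICON:
        normalized = re.sub(r"(.)\1+", r"\1", normalized)
    return normalized, boost

def sentiment(message):
    """Sum lexicon scores, scaled by the linguistic-feature multipliers."""
    score = 0.0
    for token in message.split():
        word, boost = extract_features(token)
        score += LEXICON.get(word, 0.0) * boost
    return score

print(sentiment("This is GREAT"))   # → 3.0 (capitalization boosts 2.0 by 1.5)
print(sentiment("soooo baaad"))     # → -1.5 (lengthening boosts -1.0 by 1.5)
```

Because feature extraction and scoring are separate functions, the multipliers can be reconfigured per application context, which is the modularity the abstract emphasizes.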
1:50pm - 2:10pm
Collaborative Dictionary Learning and Soft Assignment for Sparse Coding of Image Features
(1) College of Information and Engineering, Capital Normal University, Beijing 100048, P.R. China; (2) Institute of Computing Technology, Chinese Academy of Sciences, China
In computer vision, the bag-of-words (BoW) model has been widely applied to image-related tasks such as large-scale image retrieval, image classification, and object categorization. Sparse coding (SC), used as a means of feature coding, guarantees both sparsity of the coding vector and a lower reconstruction error in the BoW model, and can thus achieve better performance than the traditional vector quantization method. However, it suffers from a side effect introduced by the non-smooth sparsity regularizer: to favor sparsity, quite different words may be selected for similar patches, resulting in a loss of correlation between the corresponding coding vectors. To address this problem, in this paper we propose a novel soft assignment method based on combining the indices of the top-2 largest sparse codes of local descriptors, which makes the SC-based BoW model tolerant to different word selections for similar patches. To further ensure that similar patches select the same words and generate similar coding vectors, we propose a collaborative dictionary learning method that imposes a sparse-code similarity regularization factor, along with row sparsity regularization across data instances, on top of group sparse coding. Experiments on the well-known public Oxford dataset demonstrate the effectiveness of our proposed methods.
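The top-2 index-combination idea can be sketched as follows: each descriptor's sparse code is mapped to the pair of its two largest-magnitude dictionary indices, so two similar patches whose leading word differs in weight but not in support still fall into the same histogram bin. This is a simplified sketch of the assignment step only, not the paper's full formulation:

```python
import numpy as np

def top2_assignment(code):
    """Pair of indices of the two largest-magnitude entries of one sparse code."""
    idx = np.argsort(-np.abs(code))[:2]
    return tuple(sorted(int(i) for i in idx))

def soft_bow(codes):
    """Histogram over combined top-2 word-index pairs."""
    hist = {}
    for code in codes:
        key = top2_assignment(code)
        hist[key] = hist.get(key, 0) + 1
    return hist

# two similar patches: the dominant word differs in weight, but both
# activate words 1 and 3, so they share one combined bin
codes = np.array([[0.0, 0.9, 0.1, 0.4],
                  [0.0, 0.8, 0.0, 0.5]])
print(soft_bow(codes))   # → {(1, 3): 2}
```

Under hard vector quantization both patches would still map to word 1 here, but the combined index also preserves the secondary word, which is what makes the assignment tolerant to small rank swaps between the top entries.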
2:10pm - 2:30pm
Multi-Task Multi-modal Semantic Hashing for Web Image Retrieval with Limited Supervision
(1) Wuhan University of Technology, People's Republic of China; (2) School of Information Technology and Electrical Engineering, The University of Queensland, Australia; (3) School of Computing, National University of Singapore
As an important element of social media, social images have become more and more important in our daily life. Recently, smart hashing schemes have emerged as a promising approach to support fast social image search. Leveraging semantic labels has shown effectiveness for hashing; however, semantic labels tend to be limited in both quantity and quality. In this paper, we propose Multi-Task Multi-modal Semantic Hashing (MTMSH) to index large-scale social image collections with limited supervision. MTMSH improves search accuracy by exploiting more semantic information in two respects. First, the latent multi-modal structure among labeled and unlabeled data is explored by Multiple Anchor Graph Learning (MAGL) to increase the quantity of semantic information. In addition, multi-task-based Share Hash Space Learning (SHSL) is proposed to improve semantic quality. Further, MAGL and SHSL are integrated in a joint framework, where the semantic function and the hash functions mutually reinforce each other. An alternating algorithm, whose time complexity is linear in the size of the training data, is also proposed. Experimental results on two large-scale real-world image datasets demonstrate the effectiveness and efficiency of MTMSH.
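To make the hashing setting concrete, the sketch below shows the generic form such schemes share: binary codes from the sign of a linear projection, then ranking by Hamming distance. MTMSH learns the projection jointly with its semantic and multi-modal components, so the fixed random `W` and toy data here are placeholder assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(X, W):
    """Binary codes via the sign of a linear projection (MTMSH would learn W
    jointly with its semantic functions rather than fix it randomly)."""
    return (X @ W > 0).astype(np.uint8)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")

# toy data: 5 database images, 8-dim features, 16-bit codes
W = rng.standard_normal((8, 16))
db = rng.standard_normal((5, 8))
codes = hash_codes(db, W)
# query with item 2's own code: its distance to itself is 0
order = hamming_rank(codes[2], codes)
```

The appeal of the binary representation is that the ranking step costs only bitwise comparisons, which is what makes hashing attractive for large-scale social image search.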
2:30pm - 2:50pm
Object-based Aggregation of Deep Features for Image Retrieval
Dalian University of Technology, People's Republic of China
In content-based visual image retrieval, image representation is one of the fundamental issues in improving retrieval performance. Recently, Convolutional Neural Network (CNN) features have shown great success as a universal representation. However, deep CNN features lack invariance to geometric transformations and object compositions, which limits their robustness for scene image retrieval. Since a scene image is always composed of multiple objects that are crucial components for understanding and describing the scene, in this paper we propose an object-based aggregation method over CNN features to obtain an invariant and compact image representation for image retrieval. The proposed method represents an image through VLAD pooling of CNN features describing the underlying objects, which makes the representation robust to the spatial layout of objects in the scene and invariant to general geometric transformations. We evaluate the proposed method on three public ground-truth datasets by comparing it with state-of-the-art approaches, and promising improvements have been achieved.
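VLAD pooling itself is a standard aggregation step: assign each object-level descriptor to its nearest codebook centroid, accumulate the residuals per centroid, and L2-normalize the concatenation. A minimal sketch, with random toy data standing in for real CNN object features and a learned codebook:

```python
import numpy as np

def vlad(descriptors, centroids):
    """VLAD aggregation: sum residuals to each descriptor's nearest centroid,
    then L2-normalize the flattened (k * d)-dimensional vector."""
    k, d = centroids.shape
    # nearest-centroid assignment
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    assign = np.argmin(dists, axis=1)
    v = np.zeros((k, d))
    for i, a in enumerate(assign):
        v[a] += descriptors[i] - centroids[a]
    v = v.ravel()
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(1)
descs = rng.standard_normal((50, 4))   # stand-in for CNN features of detected objects
cents = rng.standard_normal((3, 4))    # stand-in codebook of 3 visual words
rep = vlad(descs, cents)
print(rep.shape)   # → (12,), i.e. k * d
```

Because the residual sums ignore where each object appears in the image, the pooled vector is insensitive to the spatial layout of objects, which is the robustness property the abstract claims.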
2:50pm - 3:10pm
Uyghur Language Text Detection in Complex Background Images Using Enhanced MSERs
Chinese Academy of Sciences, People's Republic of China
Text detection in complex background images is an important prerequisite for many image content analysis tasks. Nearly all widely used text detection methods focus on English and Chinese, while minority languages, such as the Uyghur language, receive less attention from researchers. In this paper, we propose a system that detects Uyghur language text in complex background images. First, component candidates are detected by the channel-enhanced Maximally Stable Extremal Regions (MSERs) algorithm. Then, most non-text regions are removed by a two-layer filtering mechanism. Next, the remaining component regions are connected into short chains, and the short chains are expanded by an expansion algorithm to connect the missed MSERs. Finally, the chains are verified by a Random Forest classifier. Experimental comparisons on the proposed dataset show that our algorithm is effective for detecting Uyghur language text in complex background images. The F-measure is 84.8%, much better than the state-of-the-art performance of 75.5%.
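The chain-construction stage can be sketched with plain bounding boxes: greedily link vertically aligned character candidates whose horizontal gap is small relative to their heights. The alignment and gap thresholds here are illustrative assumptions, not the paper's tuned parameters, and the MSER detection and classifier stages are omitted:

```python
def chain_components(boxes, gap_ratio=1.0):
    """Greedily link character candidates (x, y, w, h) left-to-right into chains,
    a simplified stand-in for the chain construction + expansion step."""
    boxes = sorted(boxes)                      # left-to-right by x
    chains = []
    for box in boxes:
        x, y, w, h = box
        placed = False
        for chain in chains:
            px, py, pw, ph = chain[-1]
            gap = x - (px + pw)                # horizontal gap to chain tail
            aligned = abs((y + h / 2) - (py + ph / 2)) < 0.5 * max(h, ph)
            if aligned and 0 <= gap <= gap_ratio * max(h, ph):
                chain.append(box)
                placed = True
                break
        if not placed:
            chains.append([box])               # start a new chain
    return chains

# three aligned characters forming one word, plus an isolated blob
boxes = [(0, 10, 8, 12), (10, 11, 8, 12), (20, 10, 8, 12), (60, 50, 9, 9)]
print([len(c) for c in chain_components(boxes)])   # → [3, 1]
```

In the full pipeline, single-box chains like the isolated blob would typically be discarded or re-examined during expansion, and the surviving chains would be passed to the Random Forest classifier for verification.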
Contact and Legal Notice · Contact Address:
Conference Software: ConfTool Pro 2.6.107+TC
© 2001–2017 by H. Weinreich, Hamburg, Germany