Video retrieval framework based on color co-occurrence feature of adaptive low rank extracted keyframes and graph pattern matching
Affiliation:1. AGH University of Science and Technology, 30 Mickiewicza Ave, Kraków 30-059, Poland;2. VSB Technical University of Ostrava, 17. listopadu 2172/15, Ostrava-Poruba 708 00, Czech Republic
Abstract:The growing volume of multimedia repositories has created an ever-increasing demand for intelligent video retrieval. This paper presents an efficient video retrieval framework that combines singular value decomposition with computationally inexpensive ordered dither block truncation coding to extract a simple, compact, and highly discriminative Color Co-occurrence Feature (CCF). The occurrence probability of each video-frame pixel within its neighborhood is used to formulate this distinctive feature. Moreover, we apply a new adaptive low-rank thresholding, based on energy concentricity and on transposition and replacement invariance, to formulate a unified, fast shot boundary detection approach that resolves the prominent bottleneck of detecting real-time cuts and gradual transitions and, in turn, supports effective keyframe extraction. The extracted keyframes are therefore distinct and discriminative enough to represent the entire video content. For effective indexing and retrieval, the similarity score evaluator must capture the encapsulated contextual video information with strong temporal consistency, little computation, and minimal post-processing; to this end, we introduce graph-based pattern matching for video retrieval that preserves temporal consistency while maintaining accuracy and low time overhead. Experimental results show that, on the UCF11 and HMDB51 standard video datasets respectively, the proposed method on average achieves 7.40% and 17.91% better retrieval accuracy and is 23.21% and 20.44% faster than recent state-of-the-art methods.
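The core idea of the adaptive low-rank shot boundary detection can be illustrated with a short sketch: each frame is summarized by its leading singular values, with the rank chosen adaptively so that the retained spectrum covers most of the energy (the paper's "energy concentricity"), and a hard cut is declared when consecutive signatures diverge. This is a minimal illustration only; the function names, the energy level `tau`, and the distance threshold are assumptions for the example, not the paper's exact formulation.

```python
import numpy as np

def low_rank_signature(frame: np.ndarray, tau: float = 0.95) -> np.ndarray:
    """Keep the smallest number of singular values whose cumulative
    energy reaches tau; return the zero-padded, L2-normalized spectrum."""
    s = np.linalg.svd(frame.astype(float), compute_uv=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(energy, tau)) + 1  # adaptive rank
    sig = np.zeros_like(s)
    sig[:k] = s[:k]
    return sig / (np.linalg.norm(sig) + 1e-12)

def detect_cuts(frames, dist_thresh: float = 0.3):
    """Return indices i where the transition frame i -> i+1 looks like a cut."""
    sigs = [low_rank_signature(f) for f in frames]
    return [i for i in range(len(sigs) - 1)
            if np.linalg.norm(sigs[i] - sigs[i + 1]) > dist_thresh]

# Toy example: three identical frames, then a visually different shot.
frames = [np.ones((8, 8))] * 3 + [np.eye(8)] * 3
print(detect_cuts(frames))  # a single cut between frame 2 and frame 3
```

Because the signature compares normalized spectra rather than raw pixels, it is invariant to frame transposition, which is one of the invariance properties the abstract attributes to the proposed thresholding.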
Keywords:Content-based video retrieval  Color Co-occurrence Feature  Shot boundary detection  Keyframe extraction  Graph-based matching  Ordered dither block truncation coding
This article is indexed in ScienceDirect and other databases.