SwePub
Search the SwePub database


Hit list for the search "WFRF:(Fu Keren)"

Search: WFRF:(Fu Keren)

  • Results 1-31 of 31
1.
  •  
2.
  • Fu, Keren, 1988, et al. (author)
  • Adaptive Multi-Level Region Merging for Salient Object Detection
  • 2014
  • In: British Machine Vision Conference (BMVC) 2014. ; pp. 11-
  • Conference paper (peer-reviewed) abstract
    • Most existing salient object detection algorithms face the problem of either under- or over-segmenting an image. More recent methods address the problem via multi-level segmentation. However, the number of segmentation levels is manually predetermined and only works well on specific classes of images. In this paper, a new salient object detection scheme is presented based on adaptive multi-level region merging. A graph-based merging scheme is developed to reassemble regions based on their shared contour strength (a minimal illustrative sketch follows this record). This merging process adaptively completes the contours of salient objects, which can then be used for global perceptual analysis, e.g., foreground/background separation. Such contour completion is enhanced by graph-based spectral decomposition. We show that even though simple region saliency measurements are adopted for each region, encouraging performance can be obtained after across-level integration. Experiments comparing with 13 existing methods on three benchmark datasets including MSRA-1000, SOD and SED show that the proposed method results in uniform object enhancement and achieves state-of-the-art performance.
  •  
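The record above describes merging regions according to the strength of their shared contours. Below is a minimal sketch of that general idea, greedy merging of adjacent regions in a region adjacency graph by the weakest shared boundary. It is illustrative only, not the authors' algorithm; the function name merge_weakest_boundary and the edge-strength dictionary format are assumptions.

    # Minimal sketch (not the authors' code): greedily merge adjacent regions
    # whose shared boundary is weakest, loosely illustrating contour-strength
    # driven region merging. 'edges' maps a region pair to its boundary strength.

    def merge_weakest_boundary(edges, n_regions, n_levels):
        """edges: dict {(i, j): strength} for adjacent regions i < j.
        Returns a list mapping each region id to its merged cluster id."""
        parent = list(range(n_regions))

        def find(x):                      # union-find with path compression
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for _ in range(n_levels):
            # pick the currently weakest boundary between two distinct clusters
            candidates = [((i, j), s) for (i, j), s in edges.items()
                          if find(i) != find(j)]
            if not candidates:
                break
            (i, j), _ = min(candidates, key=lambda kv: kv[1])
            parent[find(j)] = find(i)     # merge the two clusters

        return [find(r) for r in range(n_regions)]

    # Example: 4 superpixels on a chain; the 0-1 boundary is weakest and merges first.
    labels = merge_weakest_boundary({(0, 1): 0.1, (1, 2): 0.9, (2, 3): 0.3}, 4, 2)
    print(labels)   # e.g. [0, 0, 2, 2]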
3.
  • Fu, Keren, 1988, et al. (author)
  • Automatic traffic sign recognition based on saliency-enhanced features and SVMs from incrementally built dataset
  • 2014
  • In: Proceedings of the 3rd International Conference on Connected Vehicles and Expo, ICCVE 2014; Vienna; Austria; 3-7 November 2014. - 9781479967292 ; pp. 947-952
  • Conference paper (peer-reviewed) abstract
    • This paper proposes an automatic traffic sign recognition method based on saliency-enhanced features and SVMs. When humans observe a traffic sign, a two-stage procedure is performed: first locating the sign region according to its unique shape and color, and then attending to the content inside the sign. The proposed saliency feature extraction attempts to resemble these two processing stages. We model the first stage by extracting salient regions of signs from detected bounding boxes provided by the sign detector. Salient region extraction is formulated as an energy propagation process on a local structured graph. The second stage is modeled by exploiting a non-linear color mapping under the guidance of the output of the first stage. As a result, the salient signature inside a sign pops out and can be directly used by subsequent SVMs for classification. The proposed method is validated on an incrementally built Chinese traffic sign dataset.
  •  
4.
  • Fu, Keren, et al. (author)
  • Deepside: A general deep framework for salient object detection
  • 2019
  • In: Neurocomputing. - : Elsevier BV. - 0925-2312 .- 1872-8286. ; 356, pp. 69-82
  • Journal article (peer-reviewed) abstract
    • Deep learning-based salient object detection techniques have shown impressive results compared to conventional saliency detection by handcrafted features. Integrating hierarchical features of Convolutional Neural Networks (CNN) to achieve fine-grained saliency detection is a current trend, and various deep architectures have been proposed by researchers, including “skip-layer”, “top-down” and “short-connection” architectures. While these architectures have achieved progressive improvements in detection accuracy, the underlying distinctions and connections between these schemes remain unclear. In this paper, we review and draw underlying connections between these architectures, and show that they can actually be unified into a general framework, which simply has side structures with different depths. Based on the idea of designing deeper side structures for better detection accuracy, we propose a unified framework called Deepside that can be deeply supervised to incorporate hierarchical CNN features. Additionally, to fuse multiple side outputs from the network, we propose a novel fusion technique based on segmentation-based pooling, which serves as a built-in component in the CNN architecture and guarantees more accurate boundary details of detected salient objects. The effectiveness of the proposed Deepside scheme against state-of-the-art models is validated on 8 benchmark datasets.
  •  
5.
  • Fu, Keren, 1988, et al. (author)
  • Detection and Recognition of Traffic Signs from Videos using Saliency-Enhanced Features
  • 2015
  • In: Nationell konferens i transportforskning, Oct. 21-22, 2015, Karlstads universitet, Sweden. ; pp. 2-
  • Conference paper (peer-reviewed) abstract
    • Traffic sign recognition, including sign detection and classification, is an essential part of advanced driver assistance systems and autonomous vehicles. Traffic sign recognition (TSR), which exploits image analysis and computer vision techniques, has drawn increasing interest lately due to recently renewed efforts in vehicle safety and autonomous driving. Applications include, among many others, advanced driver assistance systems, sign inventory, and intelligent autonomous driving. We propose efficient methods for detection and classification of traffic signs from automatically cropped street view images. The main novelties in the paper include:
      • An approach for automatic cropping of street view images from publicly available websites. The method detects and crops candidate traffic sign regions (bounding boxes) along the roads of a specified route (i.e., the beginning and end points of the road), instead of conventionally using existing datasets.
      • An approach for generating saliency-enhanced features for the classifier. A novel method for obtaining the saliency-enhanced regions is proposed, based on a propagation process that enhances the sign part that attracts visual attention, which in turn leads to salient feature extraction. This approach overcomes the shortcoming of conventional methods, where features are extracted from the entire region of a detected bounding box that usually contains other clutter (or background).
      • A coarse-to-fine classification method that first classifies among different sign categories (e.g. the categories of forbidden and warning signs), followed by fine classification of traffic signs within each category.
      The proposed methods have been tested on 2 categories of Chinese traffic signs, each containing many different signs.
  •  
6.
  • Fu, Keren, 1988, et al. (author)
  • Effective Small Dim Target Detection by Local Connectedness Constraint
  • 2014
  • In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. - 1520-6149. - 9781479928927 ; pp. 8110-8114
  • Conference paper (peer-reviewed) abstract
    • The main drawback of conventional filtering-based methods for small dim target (SDT) detection is that they cannot guarantee sufficient suppression of trivial high-frequency components belonging to the background, such as strong corners and edges. To overcome this bottleneck, this paper proposes an effective SDT detection algorithm using a local connectedness constraint. Our method provides direct control of the target size, ensures high accuracy, and can easily be embedded into the classical sliding-window based framework. The effectiveness of the proposed method is validated using images with cluttered backgrounds.
  •  
7.
  • Fu, Keren, 1988 (author)
  • Enhancement of Salient Image Regions for Visual Object Detection
  • 2014
  • Licentiate thesis (other academic/artistic) abstract
    • Salient object/region detection aims at finding interesting regions in images and videos, since such regions contain important information and easily attract human attention. The detected regions can be further used for more complicated computer vision applications such as object detection and recognition, image compression, content-based image editing, and image retrieval. One of the fundamental challenges in salient object detection is to uniformly emphasize desired objects while suppressing irrelevant background. Existing heuristic color contrast-based methods tend to produce false detections in complex scenarios and attenuate the inner parts of large salient objects. In order to achieve uniform object enhancement and background suppression, several new techniques, including color feature integration, graph-based geodesic saliency propagation, and hierarchical segmentation based on graph spectrum decomposition, are developed in this thesis to assist saliency computation. Paper 1 proposes a superpixel-based salient object detection method which takes advantage of color contrast and distribution. It develops complementary abilities among hypotheses and generates high-quality saliency maps. Paper 2 proposes a novel geodesic propagation method for salient region enhancement. It leverages an initial coarse saliency map that highlights potential salient regions, and then conducts geodesic propagation. Local connectivity of objects is retained after the proposed propagation. Papers 3 and 4 use graph-based spectral decomposition for hierarchical segmentation, which enhances saliency detection. As most previous work on salient region detection is done for still images, Paper 5 extends graph-based saliency detection methods to video processing. It combines static appearance and motion cues to construct the graph. A spatial-temporal smoothing operation is proposed on a structured graph derived from consecutive frames to maintain visual coherence both within and between frames. All the proposed methods are validated on benchmark datasets and achieve comparable or better performance relative to the state-of-the-art methods.
  •  
8.
  • Fu, Keren, 1988, et al. (author)
  • Geodesic Distance Transform-based Salient Region Segmentation for Automatic Traffic Sign Recognition
  • 2016
  • In: Proceedings - 2016 IEEE Intelligent Vehicles Symposium, IV 2016, Gothenburg, Sweden, 19-22 June 2016. - 9781509018215 ; 2016-August, pp. 948-953
  • Conference paper (peer-reviewed) abstract
    • Vision-based traffic sign recognition (TSR) requires first detecting and then classifying signs from captured images. In such a cascade system, classification accuracy is often affected by the detection results. This paper proposes a method for extracting the salient region of a traffic sign within a detection window for more accurate sign representation and feature extraction, hence enhancing the classification performance. In the proposed method, a superpixel-based distance map is first generated by applying a signed geodesic distance transform from a set of selected foreground and background seeds (a minimal illustrative sketch follows this record). An effective method for obtaining a final segmentation from the distance map is then proposed by incorporating the shape constraints of signs. Using these two steps, our method is able to automatically extract salient sign regions of different shapes. The proposed method is tested and validated in a complete TSR system. Test results show that the proposed method leads to a high classification accuracy (97.11%) on a large dataset containing street images. Compared to the same TSR system without using saliency-segmented regions, the proposed method yields a marked performance improvement (about 12.84%). Future work will extend to more traffic sign categories and compare with other benchmark methods.
  •  
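The record above relies on a signed geodesic distance transform computed from foreground and background seeds over superpixels. The sketch below illustrates one plausible way to realize such a transform with Dijkstra shortest paths on a sparse superpixel adjacency graph; this is an assumption about the pipeline, not the paper's implementation, and signed_geodesic_map is a hypothetical helper name.

    # Minimal sketch (assumed pipeline): a signed geodesic distance map over
    # superpixels via Dijkstra on a sparse adjacency graph, given foreground
    # and background seed indices.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import dijkstra

    def signed_geodesic_map(weights, fg_seeds, bg_seeds):
        """weights: (n, n) array of edge costs between adjacent superpixels
        (0 where not adjacent). Returns d_bg - d_fg per superpixel, so that
        positive values lean towards the foreground."""
        graph = csr_matrix(weights)
        d_fg = dijkstra(graph, indices=fg_seeds, directed=False).min(axis=0)
        d_bg = dijkstra(graph, indices=bg_seeds, directed=False).min(axis=0)
        return d_bg - d_fg

    # Toy example: 4 superpixels on a chain 0-1-2-3 with unit edge costs.
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(signed_geodesic_map(W, fg_seeds=[0], bg_seeds=[3]))   # [ 3.  1. -1. -3.]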
9.
  • Fu, Keren, 1988, et al. (author)
  • Geodesic Saliency Propagation for Image Salient Region Detection
  • 2013
  • In: IEEE Int'l conf. on Image Processing (ICIP 2013), Sept. 15-18, Melbourne, Australia. - 9781479923410 ; pp. 3278-3282
  • Conference paper (peer-reviewed) abstract
    • This paper proposes a novel geodesic saliency propagation method in which detected salient objects may be isolated from both the background and other clutter by adding global considerations to the detection process. The method transmits saliency energy from a coarse saliency map to all image parts, rather than from the image boundaries as in conventional approaches. The coarse saliency map is computed using a combination of global contrast and the Harris convex hull. Superpixels from the pre-segmented image are used as a pre-processing step to further improve efficiency. The proposed propagation is assisted by geodesic distance and retains the local connectivity of objects. It is capable of rendering a uniform saliency map while suppressing the background, so that salient objects pop out. Experiments were conducted on a benchmark dataset; visual comparisons and performance evaluations against eight existing methods show that the proposed method is robust and achieves state-of-the-art performance.
  •  
10.
  • Fu, Keren, 1988, et al. (author)
  • Graph Construction for Salient Object Detection in Videos
  • 2014
  • In: Proceedings - International Conference on Pattern Recognition. - 1051-4651. - 9781479952083 ; pp. 2371-2376
  • Conference paper (peer-reviewed) abstract
    • Recently many graph-based salient region/object detection methods have been developed. They are rather effective for still images. However, little attention has been paid to salient region detection in videos. This paper addresses salient region detection in videos. A unified approach to graph construction for salient object detection in videos is proposed. The proposed method combines static appearance and motion cues to construct the graph, enabling a direct extension of the original graph-based salient region detection to video processing. To maintain coherence both within and between frames, a spatial-temporal smoothing operation is proposed on a structured graph derived from consecutive frames. The effectiveness of the proposed method is tested and validated using seven videos from two video datasets.
  •  
11.
  • Fu, Keren, 1988, et al. (author)
  • Learning full-range affinity for diffusion-based saliency detection
  • 2016
  • In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. - 1520-6149. - 9781479999880 ; 2016-May, pp. 1926-1930
  • Conference paper (peer-reviewed) abstract
    • In this paper we address the issue of enhancing salient object detection through diffusion-based techniques. For reliably diffusing the energy from labeled seeds, we propose a novel graph-based diffusion scheme called affinity learning-based diffusion (ALD), which is based on learning full-range affinity between two arbitrary graph nodes. The method differs from previous work in which implicit diffusion was formulated as a ranking problem on a graph (a sketch of that ranking-style baseline follows this record). In the proposed method, the affinity learning is achieved in a unified graph-based semi-supervised manner, whose outcome is leveraged for global propagation. By properly selecting an affinity learning model, the proposed ALD outperforms ranking-based diffusion in terms of accurately detecting salient objects and enhancing the correct salient objects under a range of background scenarios. By utilizing the ALD, we propose an enhanced saliency detector that outperforms 7 recent state-of-the-art saliency models on 3 benchmark datasets.
  •  
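The abstract above contrasts the proposed ALD with diffusion formulated as a ranking problem on a graph. Below is a minimal sketch of that standard ranking-style diffusion baseline (not the proposed ALD itself), using the familiar closed form f = (I − αS)⁻¹y with a symmetrically normalized affinity S; the names and parameter values are illustrative.

    # Minimal sketch of ranking-style graph diffusion: propagate seed saliency y
    # over a graph with affinity W via f = (I - alpha*S)^-1 y, where S is the
    # symmetrically normalized affinity matrix.

    import numpy as np

    def diffuse(W, y, alpha=0.99):
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt                    # normalized affinity
        n = W.shape[0]
        return np.linalg.solve(np.eye(n) - alpha * S, y)   # diffused saliency

    # Toy graph: 3 nodes, node 0 is a labeled salient seed.
    W = np.array([[0.0, 1.0, 0.2],
                  [1.0, 0.0, 0.2],
                  [0.2, 0.2, 0.0]])
    print(diffuse(W, y=np.array([1.0, 0.0, 0.0])))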
12.
  • Fu, Keren, 1988, et al. (author)
  • Normalized Cut-based Saliency Detection by Adaptive Multi-Level Region Merging
  • 2015
  • In: IEEE Transactions on Image Processing. - 1941-0042 .- 1057-7149. ; 24:12, pp. 5671-5683
  • Journal article (peer-reviewed) abstract
    • Existing salient object detection models favor over-segmented regions upon which saliency is computed. Such local regions are less effective at representing objects holistically and degrade the emphasis of entire salient objects. As a result, existing methods often fail to highlight an entire object against a complex background. Towards better grouping of objects and background, in this paper we consider graph cuts, more specifically the normalized graph cut (Ncut), for saliency detection. Since the Ncut partitions a graph in a normalized energy minimization fashion, the resulting eigenvectors of the Ncut contain good cluster information that may group visual contents (a minimal illustrative sketch follows this record). Motivated by this, we directly induce saliency maps via eigenvectors of the Ncut, contributing to accurate saliency estimation of visual clusters. We implement the Ncut on a graph derived from a moderate number of superpixels. This graph captures both the intrinsic color and edge information of the image data. Starting from the superpixels, an adaptive multi-level region merging scheme is employed to seek such cluster information from the Ncut eigenvectors. With the developed saliency measures for each merged region, encouraging performance is obtained after across-level integration. Experiments comparing with 13 existing methods on four benchmark datasets including MSRA-1000, SOD, SED and CSSD show that the proposed method, Ncut saliency (NCS), results in uniform object enhancement and achieves comparable or better performance relative to the state-of-the-art methods.
  •  
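The record above induces saliency from eigenvectors of the normalized cut. The sketch below shows the standard way such eigenvectors are obtained, from the generalized eigenproblem (D − W)v = λDv on a superpixel affinity matrix; it is a textbook illustration, not the NCS implementation, and ncut_eigenvectors is a hypothetical name.

    # Minimal sketch: Ncut eigenvectors from the generalized eigenproblem
    # (D - W) v = lambda * D v; the smallest non-trivial eigenvectors carry the
    # cluster information used for grouping.

    import numpy as np
    from scipy.linalg import eigh

    def ncut_eigenvectors(W, k=4):
        d = W.sum(axis=1)
        D = np.diag(d)
        L = D - W                               # unnormalized graph Laplacian
        vals, vecs = eigh(L, D)                 # generalized symmetric eigenproblem
        return vecs[:, 1:k + 1]                 # skip the trivial constant eigenvector

    # Toy affinity over 4 superpixels (two obvious clusters: {0,1} and {2,3}).
    W = np.array([[0.00, 1.00, 0.05, 0.05],
                  [1.00, 0.00, 0.05, 0.05],
                  [0.05, 0.05, 0.00, 1.00],
                  [0.05, 0.05, 1.00, 0.00]])
    print(ncut_eigenvectors(W, k=1))            # signs separate the two clusters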
13.
  • Fu, Keren, 1988, et al. (author)
  • One-class support vector machine-assisted robust tracking
  • 2013
  • In: Journal of Electronic Imaging. - 1017-9909. ; 22:2, pp. 11-
  • Journal article (peer-reviewed) abstract
    • Recently, tracking has been regarded as a binary classification problem by discriminative tracking methods. However, such binary classification may not fully handle the outliers, which may cause drifting. We argue that tracking may be regarded as a one-class problem, which avoids gathering limited negative samples for background description. Inspired by the fact that the positive feature space generated by a one-class support vector machine (SVM) is bounded by a closed hypersphere, we propose a tracking method utilizing one-class SVMs that adopt histograms of oriented gradients and 2-bit binary patterns as features; it is thus called the one-class SVM tracker (OCST) (a minimal illustrative sketch follows this record). Simultaneously, an efficient initialization and online updating scheme is proposed. Extensive experimental results show that OCST outperforms state-of-the-art discriminative tracking methods that tackle the problem using binary classifiers, providing accurate tracking and alleviating serious drifting.
  •  
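The OCST record above builds a target model with a one-class SVM on HOG-like features. A minimal sketch of that ingredient, using scikit-learn's OneClassSVM and scikit-image's HOG descriptor, is given below; the patch size, kernel and nu value are assumptions, and the online update and the 2-bit binary patterns are omitted.

    # Minimal sketch (assumptions, not the OCST code): fit a one-class SVM on HOG
    # features of target patches and score new candidate patches; higher decision
    # values lie deeper inside the learned positive region.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import OneClassSVM

    def fit_target_model(target_patches):
        feats = [hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                 for p in target_patches]                    # grayscale patches
        model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
        return model.fit(np.array(feats))

    def score_candidates(model, candidate_patches):
        feats = [hog(p, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                 for p in candidate_patches]
        return model.decision_function(np.array(feats))      # pick the argmax

    # Usage: patches are equally sized 2-D grayscale arrays, e.g. 64x64 crops.
    rng = np.random.default_rng(0)
    patches = [rng.random((64, 64)) for _ in range(5)]
    model = fit_target_model(patches)
    print(score_candidates(model, patches[:2]))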
14.
  • Fu, Keren, 1988, et al. (author)
  • One-Class SVM Assisted Accurate Tracking
  • 2012
  • In: 6th ACM/IEEE Int'l Conf on Distributed Smart Cameras (ICDSC 12), Oct 30 - Nov. 2, 2012, Hong Kong. ; 6 pages
  • Conference paper (peer-reviewed) abstract
    • Recently, tracking has been regarded as a binary classification problem by discriminative tracking methods. However, such binary classification may not fully handle the outliers, which may cause drifting. In this paper, we argue that tracking may be regarded as a one-class problem, which avoids gathering limited negative samples for background description. Inspired by the fact that the positive feature space generated by a one-class SVM is bounded by a closed sphere, we propose a novel tracking method utilizing one-class SVMs that adopt HOG and 2-bit BP as features, called the One-Class SVM Tracker (OCST). Simultaneously, an efficient initialization and online updating scheme is also proposed. Extensive experimental results show that OCST outperforms state-of-the-art discriminative tracking methods in providing accurate tracking and alleviating serious drifting.
  •  
15.
  • Fu, Keren, 1988, et al. (author)
  • Recognition of Chinese Traffic Signs from Street Views
  • 2015
  • Report (other academic/artistic) abstract
    • This technical report describes research work on automatically recognizing Chinese traffic signs from an implicit public resource, i.e. street views. First, we give a comprehensive survey of Chinese traffic signs and introduce our approaches for collecting street view images that can be used for experimental purposes. Then, we introduce our coarse-to-fine recognition framework consisting of sign detection, sign salient region segmentation, feature extraction (including simple text recognition from signs), and subsequent sign classification. We also propose to incrementally build a sign dataset in a semi-automatic way, aiming at reducing manual effort. Experiments on the collected datasets for both sign detection and classification have validated that the proposed framework is feasible and capable of recognizing multiple categories of Chinese traffic signs in a single input image.
  •  
16.
  • Fu, Keren, 1988, et al. (author)
  • Refinet: A Deep Segmentation Assisted Refinement Network for Salient Object Detection
  • 2019
  • In: IEEE Transactions on Multimedia. - 1520-9210. ; 21:2, pp. 457-469
  • Journal article (peer-reviewed) abstract
    • Deep convolutional neural networks (CNNs) have recently been successfully applied to saliency detection with improved performance in locating salient objects, compared to conventional saliency detection by handcrafted features. Unfortunately, due to repeated sub-sampling operations inside CNNs, such as pooling and convolution, many CNN-based saliency models fail to maintain fine-grained spatial details and boundary structures of objects. To remedy this issue, this paper proposes a novel end-to-end deep learning-based refinement model named Refinet, which is based on a fully convolutional network augmented with segmentation hypotheses. Edge-aware intermediate saliency maps are computed from segmentation-based pooling, and the two streams are then concatenated into a fully convolutional network for effective fusion and refinement, leading to more precise object details and boundaries. In addition, the resolution of the feature maps in the proposed Refinet is carefully designed to guarantee sufficient boundary clarity of the refined saliency output. Compared to the widely employed dense conditional random field (CRF), Refinet is able to enhance coarse saliency maps generated by existing models with more accurate spatial details, and its effectiveness is demonstrated by experimental results on 7 benchmark datasets.
  •  
17.
  • Fu, Keren, 1988, et al. (author)
  • Robust manifold-preserving diffusion-based saliency detection by adaptive weight construction
  • 2016
  • In: Neurocomputing. - : Elsevier BV. - 0925-2312 .- 1872-8286. ; 175:Part A, pp. 336-347
  • Journal article (peer-reviewed) abstract
    • Graph-based diffusion techniques have drawn much interest lately for salient object detection. The diffusion performance is heavily dependent on the edge weights in the graph, which represent the similarity between nodes and are usually set through manual tuning. To improve the diffusion performance, this paper proposes a robust diffusion scheme, referred to as manifold-preserving diffusion (MPD), that is built jointly on two assumptions for preserving the manifold used in saliency detection. The smoothness assumption reflects the conditional random field (CRF) property, and the related penalty term enforces similar saliency on similar graph neighbors. The penalty term related to the local reconstruction assumption enforces a local linear mapping from the feature space to saliency values. The graph edge weights in the above two penalties of the proposed MPD method are determined adaptively by minimizing local reconstruction errors in feature space (a minimal illustrative sketch follows this record). This enables a better adaptation of the diffusion to different images. The final diffusion process is then formulated as a regularized optimization problem, taking into account initial seeds, manifold smoothness and local reconstruction. Consequently, when applied to saliency diffusion, MPD provides a higher performance upper bound than some existing diffusion methods such as manifold ranking. By utilizing MPD, we further introduce a two-stage saliency detection scheme, referred to as manifold-preserving diffusion-based saliency (MPDS), where the boundary prior, the Harris convex hull, and the foci convex hull are employed for deriving initial seeds and a coarse map for MPD. Experiments were conducted on five benchmark datasets and compared with eight existing methods. Our results show that the proposed method is robust in terms of consistently achieving the highest weighted F-measure and lowest mean absolute error, while maintaining comparable precision–recall curves. Salient objects against different backgrounds are uniformly highlighted in the final output saliency maps.
  •  
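The abstract above determines graph edge weights adaptively by minimizing local reconstruction errors in feature space. The sketch below illustrates one common form of such local reconstruction weights, in the spirit of locally linear embedding, where each node's feature vector is reconstructed from its neighbors with weights that sum to one; this is an assumption about the flavor of the construction, not the exact MPD formulation.

    # Minimal sketch (LLE-style assumption): per-node neighbor weights that best
    # reconstruct the node's feature vector, subject to summing to one.

    import numpy as np

    def local_reconstruction_weights(X, neighbors, reg=1e-3):
        """X: (n, d) node features; neighbors: list of neighbor-index lists.
        Returns an (n, n) weight matrix with rows summing to 1 on each node's
        neighborhood."""
        n = X.shape[0]
        W = np.zeros((n, n))
        for i, nbrs in enumerate(neighbors):
            Z = X[nbrs] - X[i]                              # centered neighbors
            G = Z @ Z.T                                     # local Gram matrix
            G += reg * np.trace(G) * np.eye(len(nbrs))      # regularization
            w = np.linalg.solve(G, np.ones(len(nbrs)))
            W[i, nbrs] = w / w.sum()                        # enforce sum-to-one
        return W

    # Toy example: 4 nodes in a 2-D feature space, 2 neighbors each.
    X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    nbrs = [[1, 2], [0, 3], [0, 3], [1, 2]]
    print(local_reconstruction_weights(X, nbrs))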
18.
  • Fu, Keren, 1988, et al. (author)
  • Saliency Detection by Fully Learning A Continuous Conditional Random Field
  • 2017
  • In: IEEE Transactions on Multimedia. - 1520-9210. ; 19:7, pp. 1531-1544
  • Journal article (peer-reviewed) abstract
    • Salient object detection aims at detecting and segmenting objects that human eyes are most focused on when viewing a scene. Recently, the conditional random field (CRF) has drawn renewed interest and has been exploited in this field. However, when utilizing a CRF with unary and pairwise potentials having essential parameters, most existing methods only employ manually designed parameters, or learn parameters only partly for the unary potentials. Observing that saliency estimation is a continuous labeling issue, this paper proposes a novel data-driven scheme based on a special CRF framework, the so-called continuous CRF (C-CRF), where parameters for both unary and pairwise potentials are jointly learned. The proposed C-CRF learning provides an optimal way to integrate various unary saliency features with pairwise cues to discover salient objects. To the best of our knowledge, the proposed scheme is the first to completely learn a C-CRF for saliency detection. In addition, we propose a novel formulation of pairwise potentials that enables learning weights for different spatial ranges on a superpixel graph. The proposed C-CRF learning-based saliency model is tested on 6 benchmark datasets and compared with 11 existing methods. Our results and comparisons provide further support for the proposed method in terms of precision-recall and F-measure. Furthermore, incorporating existing saliency models with pairwise cues through the C-CRF is shown to provide a marked performance boost over the individual models.
  •  
19.
  • Fu, Keren, 1988, et al. (author)
  • Salient Object Detection Using Normalized Cut and Geodesics
  • 2015
  • In: Proceedings - International Conference on Image Processing, ICIP. - 1522-4880. - 9781479983391 ; 2015-December, pp. 1100-1104
  • Conference paper (peer-reviewed) abstract
    • The normalized graph cut (Ncut) is conventionally used for partitioning a graph based on energy minimization, and has lately been used for salient object detection. Observing that the Ncut generates eigenvectors containing cluster information, we propose to incorporate the eigenvectors of the Ncut with the geodesic saliency detection model to obtain enhanced salient object detection. In addition, the appearance cue and the intervening contour cue are jointly exploited for computing the graph affinity. The proposed method has been tested and evaluated on four benchmark datasets and compared with 12 existing methods. Our results provide strong support for the robustness of the proposed method.
  •  
20.
  • Fu, Keren, 1988 (author)
  • Salient Region Detection Methods with Application to Traffic Sign Recognition from Street View Images
  • 2016
  • Doctoral thesis (other academic/artistic) abstract
    • In the computer vision community, saliency detection refers to modeling the selective mechanism of human visual attention. The outputs of saliency detection algorithms are called saliency maps, which represent the conspicuousness levels of different scene areas. Since saliency detection is an effective way to estimate regions of interest that may be attractive to human eyes, numerous applications range from object recognition and image compression to content-based image editing and image retrieval. This thesis focuses on salient region detection, which aims at detecting and segmenting holistic salient objects from natural images. Despite many existing models/algorithms and rapid progress in this field over the past decade, improving the detection performance in complex and unconstrained scenarios remains challenging. This thesis proposes five innovative methods for salient region detection. Each method is designed to solve issues in the existing models. The main contributions of this thesis include: 1) A novel method that induces saliency maps through eigenvectors of the normalized graph cut for better visual clustering of objects and background, leading to more accurate saliency estimation. 2) A novel data-driven method for salient region detection based on the continuous conditional random field (C-CRF), which provides an optimal way to integrate various unary saliency features with pairwise cues. 3) A robust graph-based diffusion method, referred to as manifold-preserving diffusion (MPD). Based on two assumptions on the manifold, smoothness and local reconstruction, the method preserves the manifold used in the saliency diffusion. 4) A superpixel-based method that effectively computes color contrast and color distribution attributes of images in a unified manner. 5) A new geodesic propagation method that is used to optimize coarse salient regions for rendering visual coherence. In addition, driven by applications, this thesis also addresses the traffic sign recognition (TSR) problem from street view images. As a new application linking saliency detection and TSR, salient region detection of traffic signs is investigated in order to enhance the sign classification performance.
  •  
21.
  • Fu, Keren, 1988, et al. (author)
  • Spectral salient object detection
  • 2018
  • In: Neurocomputing. - : Elsevier BV. - 0925-2312 .- 1872-8286. ; 275, pp. 788-803
  • Journal article (peer-reviewed) abstract
    • Many salient object detection methods first apply pre-segmentation to an image to obtain over-segmented regions that facilitate subsequent saliency computation. However, these pre-segmentation methods often ignore the holistic nature of objects and can degrade object detection performance. This paper proposes a novel method, spectral salient object detection, that aims at maintaining objects holistically during pre-segmentation in order to provide more reliable feature extraction from a complete object region and to facilitate object-level saliency estimation. In the proposed method, a hierarchical spectral partition method based on the normalized graph cut (Ncut) is proposed for the image segmentation phase of saliency detection, where a superpixel graph that captures the intrinsic color and edge information of an image is constructed and then hierarchically partitioned. At each hierarchy level, a region constituted by superpixels is evaluated by criteria based on figure-ground principles and a statistical prior to obtain a regional saliency score. The coarse salient region is obtained by integrating multiple saliency maps from successive hierarchies. The final saliency map is derived by minimizing a graph-based semi-supervised learning energy function on the synthesized coarse saliency map. Despite the simple intuition of maintaining object holism, experimental results on 5 benchmark datasets including ASD, ECSSD, MSRA, PASCAL-S and DUT-OMRON demonstrate the encouraging performance of the proposed method, along with comparisons to 13 state-of-the-art methods. The proposed method is shown to be effective at uniformly emphasizing large and medium-sized salient objects due to the employment of the Ncut. In addition, we conduct a thorough analysis and evaluation of parameters and individual modules.
  •  
22.
  • Fu, Keren, 1988, et al. (author)
  • Spectral salient object detection
  • 2014
  • In: Proceedings - IEEE International Conference on Multimedia and Expo. - 1945-7871 .- 1945-788X. - 9781479947614 ; 2014-September, pp. 6-
  • Conference paper (peer-reviewed) abstract
    • Many existing methods for salient object detection operate by over-segmenting images into non-overlapping regions, which facilitate local/global color statistics for saliency computation. In this paper, we propose a new approach, spectral salient object detection, which benefits from selected attributes of the normalized cut, enabling better retention of holistic salient objects compared to conventionally employed pre-segmentation techniques. The proposed saliency detection method recursively bi-partitions the region that renders the lowest cut cost in each iteration, resulting in a binary spanning tree structure. Each segmented region is then evaluated under criteria that fit Gestalt laws and a statistical prior. The final result is obtained by integrating multiple intermediate saliency maps. Experimental results on three benchmark datasets demonstrate the effectiveness of the proposed method against 13 state-of-the-art approaches to salient object detection.
  •  
23.
  • Fu, Keren, 1988, et al. (author)
  • Superpixel based color contrast and color distribution driven salient object detection
  • 2013
  • In: Signal Processing: Image Communication. - : Elsevier BV. - 0923-5965. ; 28:10, pp. 1448-1463
  • Journal article (peer-reviewed) abstract
    • Color is the most informative low-level feature and can convey a great deal of saliency information about a given image. Unfortunately, the color feature is seldom fully exploited in previous saliency models. Motivated by three basic properties of a salient object, namely the center distribution prior, high color contrast to surroundings, and compact color distribution, in this paper we design a comprehensive salient object detection system that takes advantage of color contrast together with color distribution and outputs high-quality saliency maps. The overall procedure of our unified framework consists of superpixel pre-segmentation, color contrast and color distribution computation, combination, and final refinement. In the color contrast saliency computation, we calculate center-surround color contrast and then employ the distribution prior in order to select correct color components (a minimal illustrative sketch follows this record). A global saliency smoothing procedure based on superpixel regions is introduced as well. This processing step alleviates the saliency distortion problem, leading to the entire object being highlighted uniformly. Finally, a saliency refinement approach is adopted to eliminate artifacts and recover unconnected parts within the combined saliency maps. In visual comparison, our method produces higher quality saliency maps which emphasize the whole object while suppressing background clutter. Both qualitative and quantitative experiments show that our approach outperforms 8 state-of-the-art methods, achieving the highest precision rate of 96% (a 3% improvement over the previously highest), when evaluated on one of the most popular datasets. Excellent content-aware image resizing can also be achieved using our saliency maps.
  •  
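The record above computes center-surround color contrast over superpixels. A minimal, spatially weighted contrast sketch is given below; the Gaussian spatial weighting, the min-max normalization and the function name color_contrast_saliency are illustrative assumptions rather than the paper's exact formulation.

    # Minimal sketch: per-superpixel color contrast, spatially weighted so that
    # nearby superpixels contribute more to the contrast score.

    import numpy as np

    def color_contrast_saliency(colors, centers, sigma=0.25):
        """colors : (n, 3) mean color per superpixel (e.g. Lab or RGB)
        centers: (n, 2) normalized spatial centroids in [0, 1]
        Returns an n-vector of contrast scores scaled to [0, 1]."""
        color_diff = np.linalg.norm(colors[:, None] - colors[None, :], axis=2)
        spatial = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        weight = np.exp(-(spatial ** 2) / (2 * sigma ** 2))
        sal = (weight * color_diff).sum(axis=1)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

    # Toy example: superpixel 0 differs strongly in color from its neighbors.
    colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
    centers = np.array([[0.5, 0.5], [0.4, 0.5], [0.6, 0.5], [0.5, 0.6]])
    print(color_contrast_saliency(colors, centers))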
24.
  • Fu, Keren, 1988, et al. (author)
  • Traffic Sign Recognition using Salient Region Features: A Novel Learning-based Coarse-to-Fine Scheme
  • 2015
  • In: IEEE Intelligent Vehicles Symposium, June 28-July 1, 2015, Seoul, Korea. - 9781467372664 ; 2015-August, pp. 443-448
  • Conference paper (peer-reviewed) abstract
    • Traffic sign recognition, including sign detection and classification, is essential for advanced driver assistance systems and autonomous vehicles. This paper introduces a novel machine learning-based sign recognition scheme. In the proposed scheme, detection and classification are realized through learning in a coarse-to-fine manner. Based on the observation that signs in the same category share some common attributes in appearance, the proposed scheme first distinguishes each individual sign category from the background in the coarse learning stage (i.e. sign detection), followed by distinguishing different sign classes within each category in the fine learning stage (i.e. sign classification). Both stages are realized through machine learning techniques. A complete recognition scheme is developed that is effective for simultaneously recognizing multiple categories of traffic signs. In addition, a novel saliency-based feature extraction method is proposed for sign classification. The method segments salient sign regions by leveraging geodesic energy propagation. Compared with conventional feature extraction, our method provides more reliable feature extraction from salient sign regions. The proposed scheme is tested and validated on two categories of Chinese traffic signs from Tencent street view. Evaluations on the test dataset show reasonably good performance, with an average of 97.5% true positives and 0.3% false positives on the two categories of traffic signs.
  •  
25.
  • Ge, Chenjie, 1991, et al. (author)
  • Co-saliency detection via inter and intra saliency propagation
  • 2016
  • In: Signal Processing: Image Communication. - 0923-5965. ; 44, pp. 69-83
  • Journal article (peer-reviewed) abstract
    • The goal of salient object detection from an image is to extract the regions that capture the attention of the human visual system more than other regions of the image. In this paper a novel method is presented for detecting salient objects from a set of images, known as co-saliency detection. We treat co-saliency detection as a two-stage saliency propagation problem. The first, inter-saliency propagation stage utilizes the similarity between a pair of images to discover common properties of the images with the help of a single-image saliency map. With the pairwise co-salient foreground cue maps obtained, the second, intra-saliency propagation stage refines pairwise saliency detection using a graph-based method combining both foreground and background cues. A new fusion strategy is then used to obtain the co-saliency detection results. Finally, an integrated multi-scale scheme is employed to obtain pixel-level co-saliency maps. The proposed method makes use of existing saliency detection models for co-saliency detection and is not overly sensitive to the initial saliency model selected. Extensive experiments on three benchmark databases show the superiority of the proposed co-saliency model against state-of-the-art methods, both subjectively and objectively.
  •  
26.
  • Ge, Chenjie, 1991, et al. (author)
  • Co-saliency detection via similarity-based saliency propagation
  • 2015
  • In: 2015 IEEE International Conference on Image Processing (ICIP). ; pp. 1845-1849
  • Conference paper (peer-reviewed) abstract
    • In this paper, we present a method for discovering the common salient objects from a set of images. We treat co-saliency detection as a pairwise saliency propagation problem, which utilizes the similarity between each pair of images to measure the common property with the guidance of a single-image saliency map. Given the pairwise co-salient foreground maps, pairwise saliency is optimized by combining the initial background cues. Pairwise co-salient maps are then fused according to a novel fusion strategy based on the focus of human attention. Finally, we adopt an integrated multi-scale scheme to obtain the pixel-level saliency map. Our proposed model makes an existing single-image saliency model perform well in co-saliency detection and is not overly sensitive to the initial saliency model selected. Extensive experiments on two benchmark databases show the superiority of our co-saliency model against state-of-the-art methods, both subjectively and objectively.
  •  
27.
  • Liu, Fanghui, et al. (author)
  • Robust visual tracking via constrained correlation filter coding
  • 2016
  • In: Pattern Recognition Letters. - : Elsevier BV. - 0167-8655. ; 84, pp. 163-169
  • Journal article (peer-reviewed) abstract
    • Unconstrained correlation filter-based trackers achieve superior performance with high speed in visual tracking. However, such unconstrained correlation filters do not impose any hard constraint forcing their responses to take a certain value, which brings about classification ambiguity on intractable samples (i.e., two similar samples from different classes); a sketch of the unconstrained baseline follows this record. To tackle this issue, in this paper a constrained correlation filter is introduced into the visual tracking framework to alleviate classification ambiguity for more accurate target location. By imposing distinguishable hard constraints on the response map for different classes, a supervised coding method is proposed to encode various candidate samples by a discriminative filter bank. The learned high-level feature vectors are sent to a naive Bayes classifier to separate the target from the background. In addition, parameter updating schemes for the constrained filter and the classifier are introduced to adapt to appearance changes of the target with less risk of drifting. Both qualitative and quantitative evaluations on the Object Tracking Benchmark (OTB) show that the proposed tracking method achieves favorable performance compared with other state-of-the-art methods.
  •  
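The abstract above contrasts the proposed constrained filter with unconstrained correlation filters. Below is a minimal sketch of a standard unconstrained correlation filter (in the spirit of MOSSE), learned in the Fourier domain against a Gaussian-shaped target response; it illustrates only the baseline being contrasted, not the proposed constrained coding method, and all names and parameter values are illustrative.

    # Minimal sketch: unconstrained correlation filter learned in the Fourier
    # domain, H* = G * conj(F) / (F * conj(F) + lambda), with a Gaussian target
    # response G whose peak is shifted to the origin.

    import numpy as np

    def train_filter(template, lam=1e-2):
        h, w = template.shape
        yy, xx = np.mgrid[0:h, 0:w]
        g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * 2.0 ** 2))
        G = np.fft.fft2(np.fft.ifftshift(g))      # desired response spectrum
        F = np.fft.fft2(template)
        return G * np.conj(F) / (F * np.conj(F) + lam)

    def respond(H_star, patch):
        """Correlation response of a new patch with the learned filter."""
        return np.real(np.fft.ifft2(H_star * np.fft.fft2(patch)))

    # Usage: the training template responds with a peak at (0, 0) by design.
    rng = np.random.default_rng(0)
    tmpl = rng.random((32, 32))
    resp = respond(train_filter(tmpl), tmpl)
    print(np.unravel_index(resp.argmax(), resp.shape))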
28.
  • Liu, Fanghui, et al. (author)
  • Robust visual tracking via inverse nonnegative matrix factorization
  • 2016
  • In: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings. - 1520-6149. - 9781479999880 ; 2016-May, pp. 1491-1495
  • Conference paper (peer-reviewed) abstract
    • The establishment of a robust target appearance model over time is an overriding concern in visual tracking. In this paper, we propose an inverse nonnegative matrix factorization (NMF) method for robust appearance modeling. Rather than using a linear combination of nonnegative basis vectors for each target image patch as in conventional NMF, the proposed method reverses the idea of the conventional NMF tracker (a sketch of that conventional encoding follows this record). It utilizes both the foreground and background information, and imposes a local coordinate constraint, where the basis matrix is a sparse matrix formed from the linear combination of candidates with corresponding nonnegative coefficient vectors. Inverse NMF is used as a feature encoder, where the resulting coefficient vectors are fed into an SVM classifier to separate the target from the background. The proposed method is tested on several videos and compared with seven state-of-the-art methods. Our results provide further support for the effectiveness and robustness of the proposed method.
  •  
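The abstract above reverses the conventional NMF appearance encoding. For orientation, the sketch below shows that conventional encoding (a nonnegative basis learned from target templates, with candidates encoded as nonnegative coefficient vectors), not the proposed inverse formulation; the data shapes and parameters are placeholders.

    # Minimal sketch of conventional NMF appearance encoding: learn a nonnegative
    # basis from vectorized target templates and encode candidate patches as
    # nonnegative coefficients, which could then feed a classifier.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    templates = rng.random((20, 256))       # 20 vectorized 16x16 target patches
    candidates = rng.random((5, 256))       # 5 candidate patches to encode

    nmf = NMF(n_components=8, init="nndsvda", max_iter=500)
    coeffs_templates = nmf.fit_transform(templates)   # template codes
    coeffs_candidates = nmf.transform(candidates)     # candidate codes (features)
    print(coeffs_candidates.shape)                    # (5, 8)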
29.
  • Yun, Yixiao, 1987, et al. (author)
  • Human Activity Recognition in Images Using SVMs and Geodesics on Smooth Manifolds
  • 2014
  • In: 8th ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC 2014; Venezia; Italy; 4 November 2014 through 7 November 2014. - New York, NY, USA : ACM. - 9781450329255 ; Art. no. a20
  • Conference paper (peer-reviewed) abstract
    • This paper addresses the problem of human activity recognition in still images. We propose a novel method that focuses on human-object interaction for feature representation of activities on Riemannian manifolds, and exploits the underlying Riemannian geometry for classification. The main contributions of the paper include: (a) representing human activity by appearance features from local patches centered at hands containing interacting objects, and by structural features formed from the detected human skeleton containing the head, torso axis and hands; (b) formulating an SVM kernel function based on geodesics on Riemannian manifolds under the log-Euclidean metric (a minimal illustrative sketch follows this record); (c) applying a multi-class SVM classifier on the manifold under the one-against-all strategy. Experiments were conducted on a dataset containing 17196 images in 12 classes of activities from 4 subjects. Test results, evaluations, and comparisons with state-of-the-art methods provide support for the effectiveness of the proposed scheme.
  •  
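Contribution (b) above formulates an SVM kernel from geodesics under the log-Euclidean metric. A minimal sketch of such a kernel between symmetric positive-definite covariance descriptors, using d(X, Y) = ||log X − log Y||_F inside an RBF, is given below; the result can be passed to an SVM as a precomputed kernel. The function name and the gamma value are assumptions.

    # Minimal sketch: geodesic RBF kernel between SPD covariance descriptors
    # under the log-Euclidean metric, usable as a precomputed SVM kernel.

    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_kernel(covs_a, covs_b, gamma=0.5):
        logs_a = [np.real(logm(C)) for C in covs_a]
        logs_b = [np.real(logm(C)) for C in covs_b]
        K = np.zeros((len(logs_a), len(logs_b)))
        for i, La in enumerate(logs_a):
            for j, Lb in enumerate(logs_b):
                d = np.linalg.norm(La - Lb, "fro")   # geodesic distance
                K[i, j] = np.exp(-gamma * d ** 2)    # RBF on the manifold
        return K

    # Toy SPD matrices (covariance descriptors of two samples).
    A = np.array([[2.0, 0.3], [0.3, 1.0]])
    B = np.array([[1.5, 0.1], [0.1, 1.2]])
    print(log_euclidean_kernel([A, B], [A, B]))      # e.g. with sklearn SVC(kernel="precomputed")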
30.
  • Yun, Yixiao, 1987, et al. (author)
  • Visual Object Tracking with Online Learning on Riemannian Manifolds by One-Class Support Vector Machines
  • 2014
  • In: IEEE International Conference on Image Processing (ICIP 2014), Oct. 27-30, 2014, Paris, France. - 9781479957514 ; pp. 1902-1906
  • Conference paper (peer-reviewed) abstract
    • This paper addresses issues in video object tracking. We propose a novel method where tracking is regarded as a one-class classification problem of domain-shift objects. The proposed tracker is inspired by the fact that the positive samples can be bounded by a closed hypersphere generated by one-class support vector machines (SVM), leading to a solution for robust online learning of the target model. The main novelties of the paper include: (a) representing the target model by a set of positive samples as a cluster of points on Riemannian manifolds; (b) performing online learning of the target model as a dynamic cluster of points flowing on the manifold, in alternation with tracking; (c) formulating a geodesic-based kernel function for one-class SVMs on Riemannian manifolds under the log-Euclidean metric. Experiments are conducted on several videos, and the results provide support for the proposed method.
  •  
31.
  • Zheng, Yan, et al. (author)
  • Precursors and Pathways Leading to Enhanced Secondary Organic Aerosol Formation during Severe Haze Episodes
  • 2021
  • In: Environmental Science and Technology. - : American Chemical Society (ACS). - 0013-936X .- 1520-5851. ; 55:23, pp. 15680-15693
  • Journal article (peer-reviewed) abstract
    • Molecular analyses help to investigate the key precursors and chemical processes of secondary organic aerosol (SOA) formation. We obtained the sources and molecular compositions of organic aerosol in PM2.5 in winter in Beijing by online and offline mass spectrometer measurements. Photochemical and aqueous processing were both involved in producing SOA during the haze events. Aromatics, isoprene, long-chain alkanes or alkenes, and carbonyls such as glyoxal and methylglyoxal were all important precursors. The enhanced SOA formation during the severe haze event was predominantly contributed by aqueous processing that was promoted by elevated amounts of aerosol water, to which multifunctional organic nitrates contributed the most, followed by organic compounds having four oxygen atoms in their formulae. The latter included dicarboxylic acids and various oxidation products from isoprene and aromatics, as well as products or oligomers from methylglyoxal aqueous uptake. Nitrated phenols, organosulfates, and methanesulfonic acid were also important SOA products, but their contributions to the elevated SOA mass during the severe haze event were minor. Our results highlight the importance of reducing nitrogen oxides and nitrate for future SOA control. Additionally, the formation of highly oxygenated long-chain molecules with a low degree of unsaturation in polluted urban environments requires further research.
  •  