SwePub
Search the SwePub database

Hit list for the search "WFRF:(Huang Weiming)"

  • Results 1-10 of 29
1.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2017 challenge results
  • 2017
  • In: 2017 IEEE International Conference on Computer Vision Workshops (ICCVW 2017). IEEE. ISBN 9781538610343. pp. 1949-1972
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2017 is the fifth annual tracker benchmarking activity organized by the VOT initiative. Results of 51 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies, as well as a new "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. VOT2017 goes beyond its predecessors by (i) improving the VOT public dataset and introducing a separate VOT2017 sequestered dataset, (ii) introducing a real-time tracking experiment, and (iii) releasing a redesigned toolkit that supports complex experiments. The dataset, the evaluation kit and the results are publicly available at the challenge website.
2.
  • Kristan, Matej, et al. (authors)
  • The Visual Object Tracking VOT2015 challenge results
  • 2015
  • In: Proceedings 2015 IEEE International Conference on Computer Vision Workshops (ICCVW 2015). IEEE. ISBN 9780769557205. pp. 564-586
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as in VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) extensions of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
3.
  • Kristan, Matej, et al. (authors)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE Computer Society. ISBN 9781728150239. pp. 2206-2241
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. The performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
4.
  • Feng, Wenqing, et al. (authors)
  • A novel change detection approach based on visual saliency and random forest from multi-temporal high-resolution remote-sensing images
  • 2018
  • In: International Journal of Remote Sensing. Informa UK Limited. ISSN 0143-1161, E-ISSN 1366-5901. 39:22, pp. 7998-8021
  • Journal article (peer-reviewed), abstract:
    • This article presents a novel change detection (CD) approach for high-resolution remote-sensing images, which incorporates visual saliency and random forest (RF). First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis. Second, saliency detection is used to guide the search of interest regions in the initial difference image obtained via the improved robust change vector analysis algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and the regions are subjected to fuzzy c-means (FCM) clustering to obtain the pixel-level pre-classification result, which can be used as a prerequisite for super-pixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, different super-pixel change possibilities are calculated. Furthermore, the changed and unchanged super-pixels that serve as the training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, super-pixel-based CD is implemented by applying RF based on these samples. Experimental results on Quickbird, Ziyuan 3 (ZY3), and Gaofen 2 (GF2) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, confirming the feasibility and effectiveness of the proposed approach.
5.
  • Feng, Wenqing, et al. (authors)
  • A novel change detection approach for multi-temporal high-resolution remote sensing images based on rotation forest and coarse-to-fine uncertainty analyses
  • 2018
  • In: Remote Sensing. MDPI AG. ISSN 2072-4292. 10:7
  • Journal article (peer-reviewed), abstract:
    • In the process of object-based change detection (OBCD), scale is a significant factor related to extraction and analyses of subsequent change data. To address this problem, this paper describes an object-based approach to urban area change detection (CD) using rotation forest (RoF) and coarse-to-fine uncertainty analyses of multi-temporal high-resolution remote sensing images. First, highly homogeneous objects with consistent spatial positions are identified through vector-raster integration and multi-scale fine segmentation. The multi-temporal images are stacked and segmented under the constraints of a historical land use vector map using a series of optimal segmentation scales, ranging from coarse to fine. Second, neighborhood correlation image analyses are performed to highlight pixels with high probabilities of being changed or unchanged, which can be used as a prerequisite for object-based analyses. Third, based on the coarse-to-fine segmentation and pixel-based pre-classification results, change possibilities are calculated for various objects. Furthermore, changed and unchanged objects identified at different scales are automatically selected to serve as training samples. The spectral and texture features of each object are extracted. Finally, uncertain objects are classified using the RoF classifier. Multi-scale classification results are combined using a majority voting rule to generate the final CD results. In experiments using two pairs of real high-resolution remote sensing datasets, our proposed approach outperformed existing methods in terms of CD accuracy, verifying its feasibility and effectiveness.
6.
  • Feng, Wenqing, et al. (authors)
  • Building extraction from VHR remote sensing imagery by combining an improved deep convolutional encoder-decoder architecture and historical land use vector map
  • 2020
  • In: International Journal of Remote Sensing. Informa UK Limited. ISSN 0143-1161, E-ISSN 1366-5901. 41:17, pp. 6595-6617
  • Journal article (peer-reviewed), abstract:
    • Building extraction has attracted considerable attention in the field of remote sensing image analysis. Fully convolutional network modelling, a prominent branch of deep learning, is a recently developed technique that can significantly enhance building extraction accuracy, especially for building segmentation. In this paper, we present an enhanced deep convolutional encoder-decoder (DCED) network customized for building extraction by incorporating historical land use vector maps (HVMs). The approach combines the enhanced DCED architecture with a multi-scale image pyramid for pixel-wise building segmentation. The improved DCED network, together with symmetrical dense-shortcut connection structures, is employed to establish the encoders for automatic extraction of building features. The feature maps from early layers are fused with more discriminative feature maps from the deeper layers through ‘Res path’ skip connections for superior building extraction accuracy. To further reduce the occurrence of falsely segmented buildings and to sharpen the buildings’ boundaries, the new temporal testing image is segmented under the constraints of an HVM. As a post-processing step, a majority voting strategy is employed to ensure the homogeneity of the building objects. Experimental results indicate that the proposed approach exhibits competitive quantitative and qualitative performance, effectively alleviating the salt-and-pepper phenomenon and block effects while retaining the edge structures of buildings. Compared with other state-of-the-art methods, our method achieves the best final accuracies.
7.
  • Feng, Wenqing, et al. (authors)
  • Change Detection Method for High Resolution Remote Sensing Images Using Random Forest
  • 2017
  • In: Cehui Xuebao/Acta Geodaetica et Cartographica Sinica. ISSN 1001-1595. 46:11, pp. 1880-1890
  • Journal article (peer-reviewed), abstract:
    • Studies based on object-based image analysis (OBIA), representing a paradigm shift in remote sensing image change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction accuracy and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and integrated forecasting methods. This paper presents a novel RF OBIA method for high resolution remote sensing image CD that makes full use of the advantages of RF and OBIA. Firstly, the entropy rate segmentation algorithm is used to segment the image for the purpose of measuring the homogeneity of super-pixels. Then the optimal image segmentation result is obtained from the evaluation index of the optimal super-pixel number. Afterwards, the spectral features and Gabor features of each super-pixel are extracted and used as feature datasets for training the RF model. On the basis of the initial pixel-level CD result, the changed and unchanged samples are automatically selected and used to build the classifier model in order to obtain the final object-level CD result. Experimental results on Quickbird, IKONOS and SPOT-5 multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy.
8.
  • Feng, Wenqing, et al. (authors)
  • Water Body Extraction From Very High-Resolution Remote Sensing Imagery Using Deep U-Net and a Superpixel-Based Conditional Random Field Model
  • 2019
  • In: IEEE Geoscience and Remote Sensing Letters. ISSN 1545-598X. 16:4, pp. 618-622
  • Journal article (peer-reviewed), abstract:
    • Water body extraction (WBE) has attracted considerable attention in the field of remote sensing image analysis. Herein, we present an enhanced deep convolutional encoder-decoder (DCED) network (or Deep U-Net) specifically tailored to WBE from remote sensing images by applying superpixel segmentation and conditional random fields (CRFs). First, we preclassify the entire remote sensing image into the water and nonwater areas via Deep U-Net, using the results of class membership probabilities as the unary potential in the CRF model. The pairwise potential of CRF is defined by a linear combination of Gaussian kernels, which forms a fully connected neighbor structure. Next, regional restriction is incorporated into the approach to enhance the consistency of the connected area. We use the simple linear iterative clustering algorithm to generate superpixels and correct the binary classification results by calculating their average posterior probabilities. Finally, a highly efficient approximate inference algorithm, mean-field inference, is generated for the final model. The results from the experimental application to GaoFen-2 images and WorldView-2 images demonstrate that the proposed approach exhibits competitive quantitative and qualitative performance, which effectively reduces salt-and-pepper noise and retains the edge structures of water bodies. Compared to existing state-of-the-art methods, our proposed method achieves superior final results.
9.
  • Harrie, Lars, et al. (authors)
  • Using BIM Data Together with City Models
  • 2021
  • In: GIM International - The Worldwide Magazine for Geomatics. Reed Business-GEO. ISSN 1566-9076. 35:7, pp. 27-29
  • Journal article (peer-reviewed), abstract:
    • An increasing number of cities are creating 3D city models to support visualization and simulations in the urban planning process. The 3D city models are often extended with planned buildings. One way to facilitate this is to add simplified building information modelling (BIM) models of the planned buildings to the 3D city model. This article summarizes some of the recent academic and industrial studies of this topic.
10.
  • Harrie, Lars, et al. (authors)
  • Using BIM Data Together with City Models : Exploring the Opportunities and Challenges
  • 2021
  • In: GIM International. ISSN 1566-9076. 2021:7, p. 27
  • Journal article (popular science, debate, etc.), abstract:
    • An increasing number of cities are creating 3D city models to support visualization and simulations in the urban planning process. The 3D city models are often extended with planned buildings. One way to facilitate this is to add simplified building information modelling (BIM) models of the planned buildings to the 3D city model. This article summarizes some of the recent academic and industrial studies of this topic.
Type of publication
journal article (16)
conference paper (11)
doctoral thesis (1)
research review (1)
Type of content
peer-reviewed (27)
other academic/artistic (1)
popular science, debate, etc. (1)
Author/editor
Harrie, Lars (10)
Matas, Jiri (6)
Leonardis, Ales (6)
Fernandez, Gustavo (6)
Pflugfelder, Roman (6)
Bowden, Richard (5)
Kristan, Matej (5)
Vojíř, Tomas (5)
Lukezic, Alan (5)
Bertinetto, Luca (5)
Petrosino, Alfredo (5)
Mansourian, Ali (4)
Li, Yang (4)
Felsberg, Michael (4)
Torr, Philip H.S. (4)
Danelljan, Martin (4)
Gao, Jin (4)
Zhu, Jianke (4)
Martinez, Jose M. (4)
Miksik, Ondrej (4)
Martin-Nieto, Rafael (4)
Golodetz, Stuart (4)
Lebeda, Karel (4)
Khan, Fahad (3)
Wang, Qiang (3)
Li, Xin (3)
Mishra, Deepak (3)
Li, Bo (3)
Häger, Gustav (3)
Bhat, Goutam (3)
Zhao, Fei (3)
Tang, Ming (3)
Yang, Ming-Hsuan (3)
Eldesokey, Abdelrahm ... (3)
Cehovin, Luka (3)
Du, Dawei (3)
Porikli, Fatih (3)
Jeong, Jae-chan (3)
Cho, Jae-il (3)
Kim, Ji-Wan (3)
Wen, Longyin (3)
Lyu, Siwei (3)
Choi, Sunglok (3)
Garcia-Martin, Alvar ... (3)
Varfolomieiev, Anton (3)
Battistone, Francesc ... (3)
Seetharaman, Guna (3)
Possegger, Horst (3)
Valmadre, Jack (3)
Palaniappan, Kannapp ... (3)
University
Lunds universitet (18)
Linköpings universitet (6)
Kungliga Tekniska Högskolan (4)
Stockholms universitet (1)
Linnéuniversitetet (1)
Language
English (29)
Research subject (UKÄ/SCB)
Natural sciences (26)
Engineering and technology (5)
Medicine and health sciences (1)
