SwePub

Result list for search "WFRF:(Zhang Zhen)"

Search: WFRF:(Zhang Zhen)

  • Results 1-10 of 353
1.
  • 2019
  • Journal article (peer-reviewed)
2.
3.
4.
5.
6.
  • Kristan, Matej, et al. (authors)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE Computer Soc. ISBN 9781728150239, pp. 2206-2241
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, (iii) VOT-LT2019 focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges have been introduced: (iv) VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery and (v) VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term and long-term tracking as well as tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website(1).
7.
  • Klionsky, Daniel J., et al. (authors)
  • Guidelines for the use and interpretation of assays for monitoring autophagy
  • 2012
  • In: Autophagy. Informa UK Limited. ISSN 1554-8635, 1554-8627. 8:4, pp. 445-544
  • Research review (peer-reviewed), abstract:
    • In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. A key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process vs. those that measure flux through the autophagy pathway (i.e., the complete process); thus, a block in macroautophagy that results in autophagosome accumulation needs to be differentiated from stimuli that result in increased autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
8.
  • Kristan, Matej, et al. (authors)
  • The Ninth Visual Object Tracking VOT2021 Challenge Results
  • 2021
  • In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2021). IEEE Computer Soc. ISBN 9781665401913, pp. 2711-2738
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2021 is the ninth annual tracker benchmarking activity organized by the VOT initiative. Results of 71 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The VOT2021 challenge was composed of four sub-challenges focusing on different tracking domains: (i) VOT-ST2021 challenge focused on short-term tracking in RGB, (ii) VOT-RT2021 challenge focused on "real-time" short-term tracking in RGB, (iii) VOT-LT2021 focused on long-term tracking, namely coping with target disappearance and reappearance, and (iv) VOT-RGBD2021 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2021 dataset was refreshed, while VOT-RGBD2021 introduces a training dataset and a sequestered dataset for winner identification. The source code for most of the trackers, the datasets, the evaluation kit and the results are publicly available at the challenge website(1).
9.
  • Kristan, Matej, et al. (authors)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. Cham: Springer Publishing Company. ISBN 9783030110086, 9783030110093, pp. 3-53
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
10.
  • Zhang, Zhixin, et al. (authors)
  • Vectorized rooftop area data for 90 cities in China
  • 2022
  • In: Scientific Data. Springer Nature. ISSN 2052-4463. 9:1
  • Journal article (peer-reviewed), abstract:
    • Reliable information on building rooftops is crucial for utilizing limited urban space effectively. In recent decades, the demand for accurate and up-to-date data on rooftop areas at a large scale has been increasing. However, obtaining these data is challenging due to the limited capability of conventional computer vision methods and the high cost of 3D modeling involving aerial photogrammetry. In this study, a geospatial artificial intelligence framework is presented to obtain data for rooftops using high-resolution open-access remote sensing imagery. This framework is used to generate vectorized data for rooftops in 90 cities in China. The data were validated on test samples of 180 km² across different regions, with a spatial resolution, overall accuracy, and F1 score of 1 m, 97.95%, and 83.11%, respectively. In addition, the generated rooftop area conforms to the urban morphological characteristics and reflects the level of urbanization. These results demonstrate that the generated dataset can effectively support data needs and decision-making for sustainable urban development.
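For context on the metrics in the rooftop-dataset abstract above (this note is not part of the indexed record): the F1 score is, under its standard definition, the harmonic mean of precision and recall,

    F1 = 2 · precision · recall / (precision + recall)

so the reported 83.11% summarizes how the extracted rooftop polygons balance false detections against missed rooftops, complementing the 97.95% overall accuracy measured on the same 180 km² of test samples.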
Type of publication
journal article (251)
patent (46)
conference paper (34)
doctoral thesis (10)
other publication (3)
proceedings (editorship) (2)
research review (2)
book chapter (2)
report (1)
licentiate thesis (1)
Type of content
peer-reviewed (276)
popular science, debate etc. (47)
other academic/artistic (29)
Author/editor
Zhang, Zhen, 1979- (90)
Zhang, Shi-Li (71)
Zhen, Zhang, 1979- (58)
Zhang, Zhen (48)
Zeng, Shuangshuang (27)
Wen, Chenyu (21)
Chen, Si, 1982- (20)
Lavoie, C. (19)
Hu, Qitao (18)
Solomon, Paul (15)
Östling, Mikael (14)
Yang, B (13)
Lind, Lars (13)
Chen, Xi (13)
Zhu, Y. (12)
Sun, Xiao-Feng (11)
Primetzhofer, Daniel (10)
Li, Yan (10)
Zhang, Zhen-Yu (10)
Evans, A. (9)
Boggia, José (9)
Hansen, Tine W (9)
Thijs, Lutgarde (9)
Ohkubo, Takayoshi (9)
Dolan, Eamon (9)
Imai, Yutaka (9)
Sandoya, Edgardo (9)
O'Brien, Eoin (9)
Poulter, Benjamin (9)
Ozcan, A (9)
Wang, Ji-Guang (9)
Asayama, Kei (9)
Stolarz-Skrzypek, Ka ... (9)
Casiglia, Edoardo (9)
Kawecka-Jaszcz, Kali ... (9)
Filipovsky, Jan (9)
Maestre, Gladys E. (9)
Narkiewicz, Krzyszto ... (9)
Yang, Wen-Yi (9)
Xu, L. (8)
Yang, Y. (8)
Yang, L. (8)
Hellström, Per-Erik (8)
Staessen, Jan A (8)
He, J (8)
James, K (8)
Hjort, Klas, 1964- (8)
Melgarejo, Jesus D. (8)
Guillorn, M. (8)
Lavoie, Christian (8)
Higher education institution
Uppsala universitet (207)
Linköpings universitet (41)
Kungliga Tekniska Högskolan (36)
Lunds universitet (34)
Stockholms universitet (22)
Umeå universitet (21)
Karolinska Institutet (21)
Chalmers tekniska högskola (15)
Göteborgs universitet (10)
Örebro universitet (9)
Högskolan i Skövde (8)
Sveriges Lantbruksuniversitet (4)
Luleå tekniska universitet (3)
Mälardalens universitet (3)
RISE (3)
Försvarshögskolan (2)
Högskolan i Halmstad (1)
Handelshögskolan i Stockholm (1)
Mittuniversitetet (1)
Linnéuniversitetet (1)
Blekinge Tekniska Högskola (1)
Language
English (352)
Undefined language (1)
Research subject (UKÄ/SCB)
Natural sciences (122)
Engineering and technology (121)
Medical and health sciences (52)
Social sciences (3)
Agricultural sciences (1)

Year
