SwePub
Search the SwePub database


Hit list for search "WFRF:(Van De Weijer Joost)"

Search: WFRF:(Van De Weijer Joost)

  • Results 1-10 of 166
1.
  • Lenninger, Sara, et al. (author)
  • Mirror, peephole and video - The role of contiguity in children's perception of reference in iconic signs
  • 2020
  • In: Frontiers in Psychology. - : Frontiers Media SA. - 1664-1078. ; 11
  • Journal article (peer-reviewed), abstract:
    • The present study looked at the extent to which 2-year-old children benefited from information conveyed by viewing a hiding event through an opening in a cardboard screen, seeing it as live video, as pre-recorded video, or by way of a mirror. Being encouraged to find the hidden object by selecting one out of two cups, the children successfully picked the baited cup significantly more often when they had viewed the hiding through the opening, or in live video, than when they viewed it in pre-recorded video, or by way of a mirror. All conditions rely on the perception of similarity. The study suggests, however, that contiguity – i.e., the perception of temporal and physical closeness between events – rather than similarity is the principal factor accounting for the results.
2.
  • van de Weijer, Jeroen, et al. (author)
  • Gender identification in Chinese names
  • 2020
  • In: Lingua. - : Elsevier BV. - 0024-3841. ; 234
  • Journal article (peer-reviewed), abstract:
    • In this paper we discuss a number of factors that bear on the question of whether a Chinese given name is more likely to refer to a female or a male. In some cases this can be determined (with some degree of confidence); in others it cannot. We identify the relevant factors as (1) gender-identifying characters or radicals, (2) sound symbolism, and (3) reduplication. We consider the relations between these factors, and test our predictions in a psycholinguistic experiment with native speakers, for both written and spoken Chinese.
3.
  • van de Weijer, Joost, et al. (author)
  • Proper-Name Identification
  • 2012
  • In: [Host publication title missing]. ; , pp. 729-732
  • Conference paper (peer-reviewed)
4.
  • Kristan, Matej, et al. (author)
  • The Seventh Visual Object Tracking VOT2019 Challenge Results
  • 2019
  • In: 2019 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). - : IEEE Computer Soc. - 9781728150239 ; , pp. 2206-2241
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2019 is the seventh annual tracker benchmarking activity organized by the VOT initiative. Results of 81 trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis as well as the standard VOT methodology for long-term tracking analysis. The VOT2019 challenge was composed of five challenges focusing on different tracking domains: (i) the VOT-ST2019 challenge focused on short-term tracking in RGB, (ii) the VOT-RT2019 challenge focused on "real-time" short-term tracking in RGB, and (iii) the VOT-LT2019 challenge focused on long-term tracking, namely coping with target disappearance and reappearance. Two new challenges were introduced: (iv) the VOT-RGBT2019 challenge focused on short-term tracking in RGB and thermal imagery, and (v) the VOT-RGBD2019 challenge focused on long-term tracking in RGB and depth imagery. The VOT-ST2019, VOT-RT2019 and VOT-LT2019 datasets were refreshed, while new datasets were introduced for VOT-RGBT2019 and VOT-RGBD2019. The VOT toolkit has been updated to support standard short-term tracking, long-term tracking, and tracking with multi-channel imagery. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
5.
  • Kristan, Matej, et al. (author)
  • The first visual object tracking segmentation VOTS2023 challenge results
  • 2023
  • In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). - : Institute of Electrical and Electronics Engineers Inc. - 9798350307443 - 9798350307450 ; , pp. 1788-1810
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking, with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created, along with a new toolkit and an evaluation server. Results of the 47 presented trackers indicate that modern tracking frameworks are well suited to deal with the convergence of short-term and long-term tracking, and that multiple- and single-target tracking can be considered a single problem. A leaderboard with participating tracker details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website.
6.
  • Kristan, Matej, et al. (author)
  • The Sixth Visual Object Tracking VOT2018 Challenge Results
  • 2019
  • In: Computer Vision – ECCV 2018 Workshops. - Cham : Springer Publishing Company. - 9783030110086 - 9783030110093 ; , pp. 3-53
  • Conference paper (peer-reviewed), abstract:
    • The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a “real-time” experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking subchallenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website (http://votchallenge.net).
7.
  • Ambrazaitis, Gilbert, et al. (author)
  • Multimodal levels of prominence : a preliminary analysis of head and eyebrow movements in Swedish news broadcasts
  • 2015
  • In: Proceedings from Fonetik 2015, Lund, June 8-10, 2015. Working Papers 55. - Lund : Centre for Languages and Literature, Lund University. - 0280-526X. ; 55, pp. 11-16
  • Conference paper (other academic/artistic), abstract:
    • This paper presents a first analysis of the distribution of head and eyebrow movements as a function of (a) phonological prominence levels (focal, non-focal) and (b) word accent (Accent 1, Accent 2) in Swedish news broadcasts. Our corpus consists of 31 brief news readings, comprising speech from four speakers and 986 words in total. A head movement was annotated for 229 (23.2%) of the words, while eyebrow movements occurred much more sparsely (67 cases, or 6.8%). Results of χ2-tests revealed a dependency between the distribution of movements and focal accents, while no systematic effect of word accent type was found. However, there was an effect of word accent type on the annotation of ‘double’ head movements. These occurred very sparsely, and predominantly in connection with focally accented compounds (Accent 2), which are characterized by two lexical stresses. Overall, our results suggest that head beats may have a closer association with phonological prosodic structure, while eyebrow movements may be more restricted to higher-level prominence and information-structure coding. Hence, head and eyebrow movements can represent two quite different modalities of prominence cuing, both from a formal and a functional point of view, rather than just being cumulative prominence markers.
8.
9.
  • Anwer, Rao Muhammad, et al. (author)
  • Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification
  • 2018
  • In: ISPRS Journal of Photogrammetry and Remote Sensing (Print). - : Elsevier Science BV. - 0924-2716 .- 1872-8235. ; 138, pp. 74-85
  • Journal article (peer-reviewed), abstract:
    • Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification. 
10.
  • Barratt, Daniel, et al. (author)
  • Does the Kuleshov effect really exist? Revisiting a classic film experiment on facial expressions and emotional contexts
  • 2016
  • In: Perception. - : SAGE Publications. - 0301-0066 .- 1468-4233. ; 45:8, pp. 847-874
  • Journal article (peer-reviewed), abstract:
    • According to film mythology, the Soviet filmmaker Lev Kuleshov conducted an experiment in which he combined a close-up of an actor’s neutral face with three different emotional contexts: happiness, sadness, and hunger. The viewers of the three film sequences reportedly perceived the actor’s face as expressing an emotion congruent with the given context. It is not clear, however, whether or not the so-called “Kuleshov effect” really exists. The original film footage is lost and recent attempts at replication have produced either conflicting or unreliable results. The current paper describes an attempt to replicate Kuleshov’s original experiment using an improved experimental design. In a behavioral and eye tracking study, 36 participants were each presented with 24 film sequences of neutral faces across six emotional conditions. For each film sequence, the participants were asked to evaluate the emotion of the target person in terms of valence, arousal, and category. The participants’ eye movements were recorded throughout. The results suggest that some sort of Kuleshov effect does in fact exist. For each emotional condition, the participants tended to choose the appropriate category more frequently than the alternative options, while the answers to the valence and arousal questions also went in the expected directions. The eye tracking data showed how the participants attended to different regions of the target person’s face (in light of the intermediate context), but did not reveal the expected differences between the emotional conditions.
Publication type
conference paper (80)
journal article (72)
book chapter (8)
book (2)
proceedings (editorship) (2)
other publication (2)
Content type
peer-reviewed (153)
other academic/artistic (13)
Author/editor
van de Weijer, Joost (165)
Paradis, Carita (23)
Zlatev, Jordan (21)
Sahlén, Birgitta (20)
Johansson, Victoria (13)
Grenner, Emily (13)
Khan, Fahad (11)
Åkerlund, Viktoria (11)
Schötz, Susanne (10)
Felsberg, Michael (10)
Andersson, Richard (8)
Asker-Árnason, Lena (8)
Farshchi, Sara (8)
Kupisch, Tanja (7)
Khan, Fahad Shahbaz (7)
Nyström, Marcus (6)
Ambrazaitis, Gilbert (6)
Svensson Lundmark, M ... (6)
Danelljan, Martin (6)
Andersson, Annika, 1 ... (5)
Laaksonen, Jorma (5)
Gargiulo, Chiara (5)
Eklund, Robert, 1962 ... (4)
Holmqvist, Kenneth (4)
Löhndorf, Simone (4)
Lindgren, Magnus (4)
Johansson, Victoria, ... (4)
Blomberg, Johan (4)
Khan, Fahad Shahbaz, ... (4)
Żywiczyński, Przemys ... (4)
Matas, Jiri (4)
Fernandez, Gustavo (4)
Wang, Dong (3)
Persson, Tomas (3)
Carling, Gerd (3)
Hansson, Kristina (3)
Sandgren, Olof (3)
Sonesson, Göran (3)
Granfelt, Jonas (3)
Li, Bo (3)
Bhat, Goutam (3)
Bianchi, Ivana (3)
Colonna Dahlman, Rob ... (3)
Devylder, Simon (3)
Yang, Ming-Hsuan (3)
Einfeldt, Marieke (3)
Kristan, Matej (3)
Leonardis, Ales (3)
Lukezic, Alan (3)
Ågren, Malin (3)
Higher education institution
Lunds universitet (133)
Linköpings universitet (28)
Linnéuniversitetet (9)
Högskolan Kristianstad (5)
Göteborgs universitet (4)
Stockholms universitet (2)
Umeå universitet (1)
Language
English (165)
Swedish (1)
Research subject (UKÄ/SCB)
Humanities (123)
Social sciences (23)
Natural sciences (22)
Medical and health sciences (11)
Engineering and technology (7)
Agricultural sciences (1)
