SwePub
Search the SwePub database


Hit list for search "WFRF:(van de Weijer Joost) srt2:(2010-2014)"


  • Result 1-25 of 40
1.
  • van de Weijer, Joost, et al. (author)
  • Proper-Name Identification
  • 2012
  • In: [Host publication title missing]. ; , s. 729-732
  • Conference paper (peer-reviewed)
3.
  • Carling, Gerd, et al. (author)
  • Scandoromani : Remnants of a mixed language
  • 2014
  • Book (peer-reviewed)
    • Scandoromani: Remnants of a Mixed Language is the first comprehensive, international description of the language of the Swedish and Norwegian Romano, also labeled resande/reisende. The language, an official minority language in Sweden and Norway, has a history in Scandinavia going back to the early 16th century. A mixed language of Romani and Scandinavian, it is spoken today by a vanishingly small population of mainly elderly people. This book is based on in-depth linguistic interviews with two native speakers of different families (one of whom is the co-author) as well as reviews of earlier sources on Scandoromani. The study reveals a number of interesting features of the language, as well as of mixed languages in general. In particular, the study gives support to the model of autonomy of mixed languages.
4.
  • Danelljan, Martin, et al. (author)
  • Adaptive Color Attributes for Real-Time Visual Tracking
  • 2014
  • In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2014. - : IEEE Computer Society. - 9781479951178 ; , s. 1090-1097
  • Conference paper (peer-reviewed)
    • Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
8.
  • Holmqvist, Kenneth, et al. (author)
  • Eye Tracking : A Comprehensive Guide to Methods and Measures
  • 2011
  • Book (other academic/artistic)
    • This book is written by and for researchers who are still in that part of their careers where they are actively using the eye-tracker as a tool; those who have to deal with the technology, the signals, the filters, the algorithms, the experimental design, the programming of stimulus presentation, instructions to participants, working the varying tools for data analysis, and of course, worrying about all the different things that must not go wrong! A central theme of the book concerns the wide range of fields eye tracking covers. Suppose an educational psychologist wishes to use eye tracking to evaluate a new software package designed to support learning to read. She may have an excellent idea as a starting point, and some understanding of the kind of results eye tracking could provide to tackle her research question, but unless she and the group around her are also adept in computer science, it is unlikely she will know how the eye movement data she collects is generated: how raw data samples are converted into fixations and saccades using event detection algorithms, how the different representations of eye movement data are calculated, and how all the measures of eye movements relate to these processes. All this is important because subtleties involved in working with eye-tracking data can have large consequences for the final results, and thus whether our educational psychologist can confidently conclude that her software package is effective or not in supporting the development of reading skills. This is not to say that hard-core computer science skills are the crux of good eye-tracking research, for this is certainly not the case. One can equally envisage a situation where an expert in programming and the manipulation of data plans and executes an eye-tracking study poorly, simply because she is not trained in the principles of experimental design, and the associated literature on the visual system and oculomotor control.
There are many contrasts between the diverging schools of thought which use eye tracking; practices and preferences vary, but certainly experts in different fields do not draw on each other's strengths enough. We felt there was a need to pinpoint the relative merits of adopting methods based in one field alone, whilst highlighting that the lack of synergy between different disciplines can lead to sub-optimal research practices, and new advancements being overlooked. Besides technical details and theory, however, the heart of this book revolves around practicality. At the Humanities Laboratory at Lund University we have been teaching eye-tracking methodology regularly since 2000. We commonly see newcomers to the technique run aground when encountering just the sort of issues raised above, but beginners struggle with problems which are even more practical in nature. Hands-on advice for how to actually use eye-trackers is very limited. Setting up the eye camera and performing a good calibration routine is just as important as the design of the study and how data is handled, for if the recording is poor your options are limited from the outset. There are fundamental methodological skills which underpin using eye-trackers, but at the other end of the spectrum there is also the vast choice of measures available to the eye-tracking researcher. For the present text to be complete, therefore, we felt a requirement should also be to draw together eye-tracking measures, as well as methods, into an understandable structure. So, starting around 2005, we began producing a taxonomy of all eye-movement methods and measures used by researchers, examining how the measures are related to each other, what type of data quality they rely on, and the previous data processing they require. Our classification work thus consisted of searching the method sections of thousands of journal papers, book chapters, PhD theses and conference proceedings.
Every measure and method we found was catalogued and put into a growing system. Some of the measures were extremely elusive, as they are known by different names, not only between research fields, but even within, and often the precise implementations are missing in the published texts. At first, we were very unclear how to classify measures. Some varieties of taxonomic structures that we rejected can be found on p. 463. We ended up with a classification structure where the operational definitions are at the centre. Users of eye-trackers often lack proficient training because there is little or no teaching community to rely on. As a result people are often self-taught, or depend on second-hand knowledge which may be out of date or even incorrect. When they participate in our eye-tracking methodology courses, we find that many new users are very focused on their research questions, but are surprised how much time they need to invest in order to master eye tracking properly. Often people attending have just purchased an eye-tracker to complement their research, or for use in their company to tackle ergonomic and marketing-related questions. Our aim for this book is to make learning to use eye-trackers a much easier process for these readers. If you have a solid background in experimental psychology, computer science, or mathematics you will often find it straightforward to embrace the technologies and workflows surrounding eye tracking. But whatever your background, you should be able to achieve the same level of knowledge and understanding from this book as you would from training on eye tracking in-house in a fully competent laboratory. More specifically, this book has been written to be a support when:
1. Evaluating or acquiring a commercial eye-tracker,
2. Planning an experiment where eye tracking is used as a tool,
3. About to record eye-movement data,
4. Planning how to process and interpret the recorded data, before carrying out statistical tests on it,
5. Reading or reviewing eye-movement research.
In our efforts to classify eye-tracking methods and measures, combined with useful practical hints and tips, we hope to provide the reader with the first comprehensive textbook on methodology for new users of eye tracking, but which also caters for the advanced researcher. Previous versions of this book have been used in eye-tracking education in Lund. Also, colleagues of ours in Potsdam, Tübingen, and Helsinki have used earlier manuscripts of the book when teaching and training masters and PhD level students in eye tracking. Lastly, although not the target audience, manufacturers have already shown a great interest in the book at the manuscript stage, which we hope may lead to even better eye-trackers in the future.
11.
  • Khan, Fahad Shahbaz, et al. (author)
  • Color Attributes for Object Detection
  • 2012
  • In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2012. - : IEEE. - 9781467312271 - 9781467312264 ; , s. 3306-3313
  • Conference paper (peer-reviewed)
    • State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and when combined with traditional shape features provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
12.
  • Khan, Fahad Shahbaz, 1983-, et al. (author)
  • Coloring Action Recognition in Still Images
  • 2013
  • In: International Journal of Computer Vision. - : Springer Science and Business Media LLC. - 0920-5691 .- 1573-1405. ; 105:3, s. 205-221
  • Journal article (peer-reviewed)
    • In this article we investigate the problem of human action recognition in static images. By action recognition we intend a class of problems which includes both action classification and action detection (i.e. simultaneous localization and classification). Bag-of-words image representations yield promising results for action classification, and deformable part models perform very well for object detection. The representations for action recognition typically use only shape cues and ignore color information. Inspired by the recent success of color in image classification and object detection, we investigate the potential of color for action classification and detection in static images. We perform a comprehensive evaluation of color descriptors and fusion approaches for action recognition. Experiments were conducted on the three datasets most used for benchmarking action recognition in still images: Willow, PASCAL VOC 2010 and Stanford-40. Our experiments demonstrate that incorporating color information considerably improves recognition performance, that a descriptor based on color names outperforms pure color descriptors, and that late fusion of color and shape information outperforms other approaches. Finally, we show that the different color–shape fusion approaches yield complementary information, and combining them gives state-of-the-art performance for action classification.
13.
  • Khan, Fahad Shahbaz, et al. (author)
  • Evaluating the Impact of Color on Texture Recognition
  • 2013
  • In: Computer Analysis of Images and Patterns. - Berlin, Heidelberg : Springer Berlin/Heidelberg. - 9783642402609 - 9783642402616 ; , s. 154-162
  • Conference paper (peer-reviewed)
    • State-of-the-art texture descriptors typically operate on grey-scale images while ignoring color information. A common way to obtain a joint color-texture representation is to combine the two visual cues at the pixel level. However, such an approach provides sub-optimal results for the texture categorisation task. In this paper we investigate how to optimally exploit color information for texture recognition. We evaluate a variety of color descriptors, popular in image classification, for texture categorisation. In addition we analyze different fusion approaches to combine color and texture cues. Experiments are conducted on the challenging scene and 10-class texture datasets. Our experiments clearly suggest that in all cases color names provide the best performance, and that late fusion is the best strategy to combine color and texture. Selecting the best color descriptor with the optimal fusion strategy provides a gain of 5% to 8% over texture alone on the scene and texture datasets.
14.
  • Khan, Fahad Shahbaz, et al. (author)
  • Painting-91 : a large scale database for computational painting categorization
  • 2014
  • In: Machine Vision and Applications. - : Springer Berlin/Heidelberg. - 0932-8092 .- 1432-1769. ; 25:6, s. 1385-1397
  • Journal article (peer-reviewed)
    • Computer analysis of visual art, especially paintings, is an interesting cross-disciplinary research domain. Most research in the analysis of paintings involves small to medium-sized datasets with their own specific settings. Interestingly, significant progress has been made lately in the field of object and scene recognition. A key factor in this success is the introduction and availability of benchmark datasets for evaluation. Surprisingly, such a benchmark setup is still missing in the area of computational painting categorization. In this work, we propose a novel large-scale dataset of digital paintings. The dataset consists of paintings from 91 different painters. We further show three applications of our dataset, namely artist categorization, style classification and saliency detection. We investigate how local and global features popular in image classification perform for the tasks of artist and style categorization. For both categorization tasks, our experimental results suggest that combining multiple features significantly improves the final performance. We show that state-of-the-art computer vision methods can correctly attribute 50% of unseen paintings in a large dataset to their painter, and correctly identify the artistic style in over 60% of cases. Additionally, we explore the task of saliency detection on paintings and show experimental findings using state-of-the-art saliency estimation algorithms.
15.
  • Khan, Fahad, et al. (author)
  • Semantic Pyramids for Gender and Action Recognition
  • 2014
  • In: IEEE Transactions on Image Processing. - : Institute of Electrical and Electronics Engineers (IEEE). - 1057-7149 .- 1941-0042. ; 23:8, s. 3633-3645
  • Journal article (peer-reviewed)
    • Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as face or full-body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for upper-body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.
16.
  • Khan, Rahat, et al. (author)
  • Discriminative Color Descriptors
  • 2013
  • In: Computer Vision and Pattern Recognition (CVPR), 2013. - : IEEE Computer Society. ; , s. 2866-2873
  • Conference paper (peer-reviewed)
    • Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
20.
  • Nyström, Marcus, et al. (author)
  • The influence of calibration method and eye physiology on eyetracking data quality
  • 2013
  • In: Behavior Research Methods. - : Springer Science and Business Media LLC. - 1554-3528. ; 45:1, s. 272-288
  • Journal article (peer-reviewed)
    • Recording eye movement data with high quality is often a prerequisite for producing valid and replicable results and for drawing well-founded conclusions about the oculomotor system. Today, many aspects of data quality are often informally discussed among researchers but are very seldom measured, quantified, and reported. Here we systematically investigated how the calibration method, aspects of participants' eye physiologies, the influences of recording time and gaze direction, and the experience of operators affect the quality of data recorded with a common tower-mounted, video-based eyetracker. We quantified accuracy, precision, and the amount of valid data, and found an increase in data quality when the participant indicated that he or she was looking at a calibration target, as compared to leaving this decision to the operator or the eyetracker software. Moreover, our results provide statistical evidence of how factors such as glasses, contact lenses, eye color, eyelashes, and mascara influence data quality. This method and the results provide eye movement researchers with an understanding of what is required to record high-quality data, as well as providing manufacturers with the knowledge to build better eyetrackers.
22.
  • Paradis, Carita, et al. (author)
  • Evaluative polarity of antonyms
  • 2012
  • In: Lingue e Linguaggio. - 1720-9331. ; 11, s. 199-214
  • Journal article (peer-reviewed)
Type of publication
conference paper (23)
journal article (13)
book (2)
book chapter (2)
Type of content
peer-reviewed (38)
other academic/artistic (2)
Author/Editor
van de Weijer, Joost (40)
Paradis, Carita (9)
Sahlén, Birgitta (9)
Andersson, Richard (7)
Åkerlund, Viktoria (6)
Felsberg, Michael (5)
Nyström, Marcus (4)
Johansson, Victoria (4)
Lindgren, Magnus (4)
Khan, Fahad Shahbaz (4)
Holmqvist, Kenneth (3)
Löhndorf, Simone (3)
Hansson, Kristina (3)
Sandgren, Olof (3)
Asker-Árnason, Lena (3)
Grenner, Emily (3)
Zlatev, Jordan (2)
Persson, Tomas (2)
Carling, Gerd (2)
Schötz, Susanne (2)
Johansson, Victoria, ... (2)
Sonesson, Göran (2)
Ågren, Malin (2)
Lenninger, Sara (2)
Khan, Fahad (1)
Saxena, Anju (1)
Sayehli, Susan (1)
Hoppe, Anja (1)
Ali, Sadiq (1)
Ambrazaitis, Gilbert (1)
Holmer, Arthur (1)
Johansson, Niklas (1)
Borin, Lars (1)
Anwer, Rao Muhammad (1)
Heldner, Mattias (1)
Khan, Fahad Shahbaz, ... (1)
Danelljan, Martin (1)
Dewhurst, Richard (1)
Campbell, Nick (1)
Eriksen, Love (1)
Kuckovic, Edin (1)
Lindell, Lenny (1)
Smith, Viktor (1)
Glynn, Dylan (1)
Shahbaz Khan, Fahad (1)
Divjak, Dagmar (1)
Lopez, Antonio (1)
Braaksma, Martine (1)
Halszka, Jarodzka (1)
Gibbon, Dafydd (1)
University
Lund University (31)
Linköping University (7)
Kristianstad University College (2)
Linnaeus University (1)
Language
English (40)
Research subject (UKÄ/SCB)
Humanities (26)
Social Sciences (8)
Medical and Health Sciences (5)
Natural sciences (4)
Engineering and Technology (2)
