SwePub
Search the SwePub database


Result list for search "WFRF:(Kjellström Hedvig)"

Search: WFRF:(Kjellström Hedvig)

  • Results 1-10 of 117
1.
  • Ahlberg, Simon, et al. (authors)
  • An information fusion demonstrator for tactical intelligence processing in network-based defense
  • 2007
  • In: Information Fusion. - : Elsevier BV. - 1566-2535 .- 1872-6305. ; 8:1, p. 84-107
  • Journal article (peer reviewed), abstract:
    • The Swedish Defence Research Agency (FOI) has developed a concept demonstrator called the Information Fusion Demonstrator 2003 (IFD03) for demonstrating information fusion methodology suitable for a future Network Based Defense (NBD) C4ISR system. The focus of the demonstrator is on real-time tactical intelligence processing at the division level in a ground warfare scenario. The demonstrator integrates novel force aggregation, particle filtering, and sensor allocation methods to create, dynamically update, and maintain components of a tactical situation picture. This is achieved by fusing physically modelled and numerically simulated sensor reports from several different sensor types with realistic a priori information sampled from both a high-resolution terrain model and an enemy organizational and behavioral model. This represents a key step toward the goal of creating in real time a dynamic, high fidelity representation of a moving battalion-sized organization, based on sensor data as well as a priori intelligence and terrain information, employing fusion, tracking, aggregation, and resource allocation methods all built on well-founded theories of uncertainty. The motives behind this project, the fusion methods developed for the system, as well as its scenario model and simulator architecture are described. The main services of the demonstrator are discussed and early experience from using the system is shared.
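The abstract above names particle filtering as one of the tracking components. For illustration only, here is a minimal bootstrap particle filter for 2D target tracking; the constant-velocity motion model, noise levels, and all names are assumptions for the sketch, not details of IFD03.

```python
# A minimal bootstrap particle filter for 2D target tracking, the kind of
# method the abstract refers to. The constant-velocity model and Gaussian
# measurement noise are illustrative assumptions, not the IFD03 design.
import numpy as np

rng = np.random.default_rng(0)

N = 1000                                          # number of particles
dt = 1.0                                          # time step (s)
particles = rng.normal(0.0, 10.0, size=(N, 4))    # state: [x, y, vx, vy]
weights = np.full(N, 1.0 / N)

def predict(particles):
    """Propagate each particle with a constant-velocity model plus noise."""
    particles[:, 0] += particles[:, 2] * dt
    particles[:, 1] += particles[:, 3] * dt
    particles += rng.normal(0.0, 0.5, size=particles.shape)
    return particles

def update(particles, weights, z, sigma=2.0):
    """Reweight particles by the Gaussian likelihood of a position report z."""
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / sigma**2)
    weights += 1e-300                             # avoid exact zeros
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Systematic resampling to combat particle degeneracy."""
    cum = np.cumsum(weights)
    cum[-1] = 1.0                                 # guard against float drift
    positions = (np.arange(N) + rng.random()) / N
    idx = np.searchsorted(cum, positions)
    return particles[idx], np.full(N, 1.0 / N)

# One filter cycle per incoming sensor report:
for z in [np.array([1.0, 2.0]), np.array([2.1, 3.9])]:
    particles = predict(particles)
    weights = update(particles, weights, z)
    particles, weights = resample(particles, weights)
    estimate = weights @ particles                # weighted mean state
    print("estimated [x, y, vx, vy]:", estimate)
```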
2.
  • Broomé, Sofia, et al. (authors)
  • Dynamics are important for the recognition of equine pain in video
  • 2019
  • In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. - : Institute of Electrical and Electronics Engineers (IEEE). - 1063-6919.
  • Conference paper (peer reviewed), abstract:
    • A prerequisite to successfully alleviate pain in animals is to recognize it, which is a great challenge in non-verbal species. Furthermore, prey animals such as horses tend to hide their pain. In this study, we propose a deep recurrent two-stream architecture for the task of distinguishing pain from non-pain in videos of horses. Different models are evaluated on a unique dataset showing horses under controlled trials with moderate pain induction, which has been presented in earlier work. Sequential models are experimentally compared to single-frame models, showing the importance of the temporal dimension of the data, and are benchmarked against a veterinary expert classification of the data. We additionally perform baseline comparisons with generalized versions of state-of-the-art human pain recognition methods. While equine pain detection in machine learning is a novel field, our results surpass veterinary expert performance and outperform pain detection results reported for other larger non-human species. 
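As a rough illustration of the deep recurrent two-stream architecture the abstract describes, the following PyTorch sketch combines an RGB stream and an optical-flow stream, each a small CNN feeding an LSTM, with late fusion into a binary pain/no-pain output. Layer sizes, input resolution, and the fusion scheme are assumptions, not the paper's exact model.

```python
# A minimal recurrent two-stream video classifier in the spirit of the
# abstract. All layer sizes and the fusion scheme are illustrative
# assumptions, not the published architecture.
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    def __init__(self, in_channels, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.LSTM(64, feat_dim, batch_first=True)

    def forward(self, clips):                     # clips: (B, T, C, H, W)
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # per-frame CNN features
        _, (h, _) = self.rnn(feats.view(B, T, -1))
        return h[-1]                              # last hidden state: (B, feat_dim)

class TwoStreamRecurrent(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb = StreamEncoder(in_channels=3)
        self.flow = StreamEncoder(in_channels=2)  # flow has x/y components
        self.head = nn.Linear(2 * 128, 2)         # pain vs. no pain

    def forward(self, rgb_clip, flow_clip):
        fused = torch.cat([self.rgb(rgb_clip), self.flow(flow_clip)], dim=1)
        return self.head(fused)

model = TwoStreamRecurrent()
logits = model(torch.randn(4, 16, 3, 64, 64), torch.randn(4, 16, 2, 64, 64))
print(logits.shape)  # torch.Size([4, 2])
```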
3.
  • Broomé, Sofia, et al. (authors)
  • Going Deeper than Tracking : A Survey of Computer-Vision Based Recognition of Animal Pain and Emotions
  • 2023
  • In: International Journal of Computer Vision. - : Springer Nature. - 0920-5691 .- 1573-1405. ; 131:2, p. 572-590
  • Journal article (peer reviewed), abstract:
    • Advances in animal motion tracking and pose recognition have been a game changer in the study of animal behavior. Recently, an increasing number of works go ‘deeper’ than tracking and address automated recognition of animals’ internal states such as emotions and pain, with the aim of improving animal welfare, making this a timely moment for a systematization of the field. This paper provides a comprehensive survey of computer vision-based research on recognition of pain and emotional states in animals, addressing both facial and bodily behavior analysis. We summarize the efforts presented so far within this topic, classifying them across different dimensions, highlight challenges and research gaps, and provide best-practice recommendations for advancing the field, as well as some future directions for research.
4.
  • Broomé, Sofia, 1990- (author)
  • Learning Spatiotemporal Features in Low-Data and Fine-Grained Action Recognition with an Application to Equine Pain Behavior
  • 2022
  • Doctoral thesis (other academic/artistic), abstract:
    • Recognition of pain in animals is important because pain compromises animal welfare and can be a manifestation of disease. This is a difficult task for veterinarians and caretakers, partly because horses, being prey animals, display subtle pain behavior, and because they cannot verbalize their pain. An automated video-based system has a large potential to improve the consistency and efficiency of pain predictions. Video recording is desirable for ethological studies because it interferes minimally with the animal, in contrast to more invasive measurement techniques, such as accelerometers. Moreover, to be able to say something meaningful about animal behavior, the subject needs to be studied for longer than the exposure of single images. In deep learning, we have not come as far for video as we have for single images, and even more questions remain regarding what types of architectures should be used and what these models are actually learning. Collecting video data with controlled moderate pain labels is laborious and involves real animals, and the amount of such data should therefore be limited. The low-data scenario, in particular, is under-explored in action recognition, in favor of the ongoing exploration of how well large models can learn large datasets. The first theme of the thesis is automated recognition of equine pain. Here, we propose a method for end-to-end equine pain recognition from video, finding, in particular, that the temporal modeling ability of the artificial neural network is important to improve the classification. We surpass veterinary experts on a dataset with horses undergoing well-defined moderate experimental pain induction. Next, we investigate domain transfer to another type of pain in horses: less defined, longer-acting and lower-grade orthopedic pain. We find that a smaller, recurrent video model is more robust to domain shift on a target dataset than a large, pre-trained, 3D CNN, while having equal performance on a source dataset. We also discuss challenges with learning video features on real-world datasets. Motivated by questions that have arisen within the application area, the second theme of the thesis is empirical properties of deep video models. Here, we study the spatiotemporal features that are learned by deep video models in end-to-end video classification and propose an explainability method as a tool for such investigations. Further, we explore through empirical study whether different approaches to frame dependency treatment in video models affect their cross-domain generalization ability. We also propose new datasets for light-weight temporal modeling and for investigating texture bias within action recognition.
5.
  • Broomé, Sofia, et al. (authors)
  • Recur, Attend or Convolve? : On Whether Temporal Modeling Matters for Cross-Domain Robustness in Action Recognition
  • 2023
  • In: 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV). - : Institute of Electrical and Electronics Engineers (IEEE). ; p. 4188-4198
  • Conference paper (peer reviewed), abstract:
    • Most action recognition models today are highly parameterized, and evaluated on datasets with appearance-wise distinct classes. It has also been shown that 2D Convolutional Neural Networks (CNNs) tend to be biased toward texture rather than shape in still image recognition tasks [19], in contrast to humans. Taken together, this raises suspicion that large video models partly learn spurious spatial texture correlations rather than tracking relevant shapes over time to infer generalizable semantics from their movement. A natural way to avoid parameter explosion when learning visual patterns over time is to make use of recurrence. Biological vision consists of abundant recurrent circuitry, and is superior to computer vision in terms of domain shift generalization. In this article, we empirically study whether the choice of low-level temporal modeling has consequences for texture bias and cross-domain robustness. In order to enable a light-weight and systematic assessment of the ability to capture temporal structure, not revealed from single frames, we provide the Temporal Shape (TS) dataset, as well as modified domains of Diving48 allowing for the investigation of spatial texture bias in video models. The combined results of our experiments indicate that sound physical inductive bias such as recurrence in temporal modeling may be advantageous when robustness to domain shift is important for the task.
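The paper's central variable is the choice of low-level temporal model. The sketch below shows three interchangeable temporal heads over precomputed per-frame features, one per option in the title (recur, attend, convolve); the feature dimension and pooling choices are illustrative assumptions, not the paper's setup.

```python
# Three interchangeable temporal heads over per-frame features, illustrating
# what "choice of low-level temporal modeling" means. Shapes are assumptions.
import torch
import torch.nn as nn

D = 128  # per-frame feature dimension (assumption)

class Recur(nn.Module):        # recurrence
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(D, D, batch_first=True)
    def forward(self, x):                         # x: (B, T, D)
        _, (h, _) = self.rnn(x)
        return h[-1]

class Convolve(nn.Module):     # temporal convolution
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(D, D, kernel_size=3, padding=1)
    def forward(self, x):
        y = torch.relu(self.conv(x.transpose(1, 2)))  # (B, D, T)
        return y.mean(dim=2)                      # average over time

class Attend(nn.Module):       # self-attention
    def __init__(self):
        super().__init__()
        self.attn = nn.MultiheadAttention(D, num_heads=4, batch_first=True)
    def forward(self, x):
        y, _ = self.attn(x, x, x)
        return y.mean(dim=1)

x = torch.randn(2, 16, D)      # 2 clips, 16 frames of features each
for head in (Recur(), Convolve(), Attend()):
    print(type(head).__name__, head(x).shape)     # each yields (2, 128)
```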
6.
  • Broomé, Sofia, et al. (authors)
  • Sharing pain : Using pain domain transfer for video recognition of low grade orthopedic pain in horses
  • 2022
  • In: PLOS ONE. - : Public Library of Science (PLoS). - 1932-6203. ; 17:3, e0263854
  • Journal article (peer reviewed), abstract:
    • Orthopedic disorders are common among horses, often leading to euthanasia that could have been avoided with earlier detection. These conditions often create varying degrees of subtle long-term pain. It is challenging to train a visual pain recognition method with video data depicting such pain, since the resulting pain behavior is also subtle, sparsely appearing, and varying, making it challenging even for an expert human labeller to provide accurate ground-truth for the data. We show that a model trained solely on a dataset of horses with acute experimental pain (where labeling is less ambiguous) can aid recognition of the more subtle displays of orthopedic pain. Moreover, we present a human expert baseline for the problem, as well as an extensive empirical study of various domain transfer methods and of what is detected by the pain recognition method, trained on clean experimental pain, in the orthopedic dataset. Finally, this is accompanied by a discussion of the challenges posed by real-world animal behavior datasets and how best practices can be established for similar fine-grained action recognition tasks. Our code is available at https://github.com/sofiabroome/painface-recognition.
7.
  • Bütepage, Judith, et al. (authors)
  • A Probabilistic Semi-Supervised Approach to Multi-Task Human Activity Modeling
  • Other publication (other academic/artistic), abstract:
    • Human behavior is a continuous stochastic spatio-temporal process which is governed by semantic actions and affordances as well as latent factors. Therefore, video-based human activity modeling is concerned with a number of tasks such as inferring current and future semantic labels, predicting future continuous observations as well as imagining possible future label and feature sequences. In this paper we present a semi-supervised probabilistic deep latent variable model that can represent both discrete labels and continuous observations as well as latent dynamics over time. This allows the model to solve several tasks at once without explicit fine-tuning. We focus here on the tasks of action classification, detection, prediction and anticipation as well as motion prediction and synthesis based on 3D human activity data recorded with Kinect. We further extend the model to capture hierarchical label structure and to model the dependencies between multiple entities, such as a human and objects. Our experiments demonstrate that our principled approach to human activity modeling can be used to detect current and anticipate future semantic labels and to predict and synthesize future label and feature sequences. When comparing our model to state-of-the-art approaches, which are specifically designed for, e.g., action classification, we find that our probabilistic formulation outperforms or is comparable to these task-specific models.
8.
  • Bütepage, Judith, et al. (authors)
  • Anticipating many futures : Online human motion prediction and generation for human-robot interaction
  • 2018
  • In: 2018 IEEE International Conference on Robotics and Automation (ICRA). - : IEEE Computer Society. - 9781538630815 ; p. 4563-4570
  • Conference paper (peer reviewed), abstract:
    • Fluent and safe interactions of humans and robots require both partners to anticipate the other's actions. The bottleneck of most methods is the lack of an accurate model of natural human motion. In this work, we present a conditional variational autoencoder that is trained to predict a window of future human motion given a window of past frames. Using skeletal data obtained from RGB-D images, we show how this unsupervised approach can be used for online motion prediction for up to 1660 ms. Additionally, we demonstrate online target prediction within the first 300-500 ms after motion onset without the use of target-specific training data. The advantage of our probabilistic approach is the possibility to draw samples of possible future motion patterns. Finally, we investigate how movements and kinematic cues are represented on the learned low-dimensional manifold.
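To make the probabilistic approach concrete, here is a minimal conditional variational autoencoder over flattened skeletal poses: the encoder sees past and future, the decoder is conditioned on the past, and multiple futures can be sampled at test time. All dimensions and network shapes are assumptions for the sketch, not the paper's architecture.

```python
# A minimal conditional VAE over skeletal poses in the spirit of the abstract.
# Dimensions and network shapes are illustrative assumptions.
import torch
import torch.nn as nn

P, F, J = 10, 5, 45             # past frames, future frames, joint coordinates
Z = 16                          # latent dimension

class MotionCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear((P + F) * J, 256), nn.ReLU())
        self.to_mu, self.to_logvar = nn.Linear(256, Z), nn.Linear(256, Z)
        self.dec = nn.Sequential(
            nn.Linear(P * J + Z, 256), nn.ReLU(), nn.Linear(256, F * J))

    def forward(self, past, future):              # (B, P*J), (B, F*J)
        h = self.enc(torch.cat([past, future], dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(torch.cat([past, z], dim=1))
        return recon, mu, logvar

    def sample(self, past, n=10):
        """Draw n possible futures for one window of past motion."""
        z = torch.randn(n, Z)
        return self.dec(torch.cat([past.expand(n, -1), z], dim=1))

model = MotionCVAE()
past, future = torch.randn(8, P * J), torch.randn(8, F * J)
recon, mu, logvar = model(past, future)
loss = nn.functional.mse_loss(recon, future) \
     - 0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # ELBO terms
loss.backward()
futures = model.sample(past[:1])                  # (10, F*J): many futures
print(recon.shape, futures.shape)
```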
9.
  • Bütepage, Judith, et al. (authors)
  • Deep representation learning for human motion prediction and classification
  • 2017
  • In: 30th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). - : IEEE. - 9781538604571 ; p. 1591-1599
  • Conference paper (peer reviewed), abstract:
    • Generative models of 3D human motion are often restricted to a small number of activities and can therefore not generalize well to novel movements or applications. In this work we propose a deep learning framework for human motion capture data that learns a generic representation from a large corpus of motion capture data and generalizes well to new, unseen motions. Using an encoding-decoding network that learns to predict future 3D poses from the most recent past, we extract a feature representation of human motion. Most work on deep learning for sequence prediction focuses on video and speech. Since skeletal data has a different structure, we present and evaluate different network architectures that make different assumptions about time dependencies and limb correlations. To quantify the learned features, we use the output of different layers for action classification and visualize the receptive fields of the network units. Our method outperforms the recent state of the art in skeletal motion prediction even though those methods use action-specific training data. Our results show that deep feedforward networks, trained from a generic mocap database, can successfully be used for feature extraction from human motion data, and that this representation can be used as a foundation for classification and prediction.
10.
  • Bütepage, Judith, et al. (authors)
  • Predicting the what and how - A probabilistic semi-supervised approach to multi-task human activity modeling
  • 2019
  • In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. - : IEEE Computer Society. - 9781728125060 ; p. 2923-2926
  • Conference paper (peer reviewed), abstract:
    • Video-based prediction of human activity is usually performed on one of two levels: either a model is trained to anticipate high-level action labels, or it is trained to predict future trajectories, either in skeletal joint space or in image pixel space. This separation of classification and regression tasks implies that models cannot make use of the mutual information between continuous and semantic observations. However, if a model knew that an observed human wanted to drink from a nearby glass, the space of possible trajectories would be highly constrained to reaching movements. Likewise, if a model had predicted a reaching trajectory, the inference of future semantic labels would rank 'lifting' as more likely than 'walking'. In this work, we propose a semi-supervised generative latent variable model that addresses both of these levels by modeling continuous observations as well as semantic labels. This fusion of signals allows the model to solve several tasks, such as action detection and anticipation as well as motion prediction and synthesis, simultaneously. We demonstrate this ability on the UTKinect-Action3D dataset, which consists of noisy, partially labeled multi-action sequences. The aim of this work is to encourage research within the field of human activity modeling based on mixed categorical and continuous data.
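A minimal sketch of the semi-supervised idea described above, assuming a shared encoding that feeds both a label head and a trajectory head: labeled clips train both heads directly, while unlabeled clips contribute by averaging the regression loss over the predicted label posterior. The shapes and the simple marginalization are illustrative, not the paper's exact objective.

```python
# Combining classification and regression with shared features, plus a
# marginalized loss for unlabeled data. Everything here is an assumption
# for illustration, not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F

D, K, H = 64, 10, 128           # feature dim, action classes, hidden size

encoder = nn.Sequential(nn.Linear(D, H), nn.ReLU())
classify = nn.Linear(H, K)       # semantic head: action label
regress = nn.Linear(H + K, D)    # continuous head: next observation, given label

def labeled_loss(x, x_next, y):
    h = encoder(x)
    pred = regress(torch.cat([h, F.one_hot(y, K).float()], dim=1))
    return F.cross_entropy(classify(h), y) + F.mse_loss(pred, x_next)

def unlabeled_loss(x, x_next):
    """No label: average the regression loss over the label posterior."""
    h = encoder(x)
    probs = classify(h).softmax(dim=1)            # (B, K)
    losses = []
    for k in range(K):                            # marginalize over labels
        onehot = F.one_hot(torch.full((x.shape[0],), k), K).float()
        pred = regress(torch.cat([h, onehot], dim=1))
        losses.append(F.mse_loss(pred, x_next, reduction='none').mean(dim=1))
    return (probs * torch.stack(losses, dim=1)).sum(dim=1).mean()

x, x_next = torch.randn(4, D), torch.randn(4, D)
y = torch.randint(0, K, (4,))
total = labeled_loss(x, x_next, y) + unlabeled_loss(x, x_next)
total.backward()
print(float(total))
```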
Type of publication
conference paper (79)
journal article (20)
other publication (7)
doctoral thesis (7)
book chapter (3)
research review (1)
Type of content
peer reviewed (100)
other academic/artistic (15)
popular science, debate etc. (2)
Author/editor
Kjellström, Hedvig, ... (59)
Kjellström, Hedvig (56)
Kragic, Danica (20)
Ek, Carl Henrik (12)
Kucherenko, Taras, 1 ... (12)
Zhang, Cheng (11)
Romero, Javier (11)
Pieropan, Alessandro (10)
Beskow, Jonas (8)
Broomé, Sofia (8)
Henter, Gustav Eje, ... (7)
Engwall, Olov (7)
Haubro Andersen, Pia (7)
Butepage, Judith (7)
Kragic, Danica, 1971 ... (6)
Bälter, Olle (6)
Tu, Ruibo (6)
Klasson, Marcus (5)
Salvi, Giampiero (4)
Öster, Anne-Marie (4)
Leite, Iolanda (4)
Hernlund, Elin (4)
Bergström, Niklas (4)
Ishikawa, Masatoshi (4)
Zhang, Kun (4)
Ackermann, Paul (3)
Hagman, Göran (3)
Kivipelto, Miia (3)
Nagy, Rajmund (3)
Ask, Katrina (3)
Azizpour, Hossein, 1 ... (3)
Pokorny, Florian T. (3)
Pauwels, Karl (3)
Black, Michael J. (3)
Zhang, C. (2)
Håkansson, Krister (2)
Stefanov, Kalin (2)
Akenine, Ulrika (2)
Alexanderson, Simon (2)
Neff, Michael (2)
Folkesson, John, Ass ... (2)
Rhodin, Marie (2)
Björkman, Mårten, 19 ... (2)
Ek, C. H. (2)
Bertilson, Bo C. (2)
Bech Gleerup, Karina (2)
Moell, Birger (2)
Feix, Thomas (2)
Hamesse, Charles (2)
Pieropan, Alessandro ... (2)
Higher education institution
Kungliga Tekniska Högskolan (117)
Sveriges Lantbruksuniversitet (6)
Luleå tekniska universitet (1)
Karlstads universitet (1)
Language
English (116)
Swedish (1)
Research subject (UKÄ/SCB)
Natural sciences (89)
Engineering and technology (20)
Agricultural sciences (8)
Medical and health sciences (2)
Humanities (2)

