SwePub

Result list for the search "WFRF:(Holmberg Lars) ;lar1:(mau)"

Search: WFRF:(Holmberg Lars) > Malmö universitet

  • Results 1-10 of 17
1.
  • Ghajargar, Maliheh, 1980-, et al. (author)
  • The UX of Interactive Machine Learning
  • 2020
  • In: NordiCHI 2020, 11th Nordic Conference on Human-Computer Interaction: Shaping Experiences, Shaping Society. - New York, USA : Association for Computing Machinery (ACM).
  • Conference paper (peer-reviewed) abstract
    • Machine Learning (ML) has been a prominent area of research within Artificial Intelligence (AI). ML uses mathematical models to recognize patterns in large and complex data sets to aid decision making in different application areas, such as image and speech recognition, consumer recommendations, fraud detection and more. ML systems typically go through a training period in which the system encounters and learns about the data; further, this training often requires some degree of human intervention. Interactive machine learning (IML) refers to ML applications that depend on continuous user interaction. From an HCI perspective, how humans interact with and experience ML models in training is the main focus of this workshop proposal. In this workshop we focus on the user experience (UX) of Interactive Machine Learning, a topic with implications not only for usability but also for the long-term success of the IML systems themselves.
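The workshop abstract describes systems trained through continuous user interaction. Below is a minimal sketch of one common realisation of such a loop, uncertainty sampling with a human labeler; the scikit-learn classifier and the query_user_label helper are illustrative assumptions, not anything specified by the paper.

# Minimal interactive machine learning (IML) loop: retrain as a human
# labels the pool examples the model is least confident about.
# Hypothetical sketch; query_user_label stands in for any labeling UI.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_user_label(x):
    # Placeholder for the human-in-the-loop step (e.g. a labeling UI).
    raise NotImplementedError

def iml_loop(X_labeled, y_labeled, X_pool, rounds=10):
    model = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        probs = model.predict_proba(X_pool)
        idx = int(np.argmin(probs.max(axis=1)))   # least confident example
        y = query_user_label(X_pool[idx])         # continuous user interaction
        X_labeled = np.vstack([X_labeled, X_pool[idx]])
        y_labeled = np.append(y_labeled, y)
        X_pool = np.delete(X_pool, idx, axis=0)
    return model

The UX questions the workshop raises concern exactly this interaction point, where the human repeatedly sees, corrects, and experiences the model in training.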
2.
  • Holmberg, Lars (author)
  • A Conceptual Approach to Explainable Neural Networks
  • Other publication (other academic/artistic) abstract
    • The success of neural networks largely builds on their ability to create internal knowledge representations from real-world high-dimensional data, such as images, sound, or text. How to extract and present these representations, in order to explain a neural network’s decision, is an active and multifaceted research field. To gain a deeper understanding of a central aspect of this field, we performed a targeted literature review focusing on research that aims to associate internal representations with human-understandable concepts. Using deductive-nomological explanations combined with causality theories as an analytical lens, we analyse nine carefully selected research papers. We find our analytical lens, the explanation structure and causality, useful for understanding what can and cannot be expected from explanations inferred from neural networks. The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal: is it (a) understanding the ML model, (b) understanding the training data, or (c) actionable explanations that are true-to-the-domain?
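One recurring approach in the literature this paper reviews is to probe a network's internal activations for a human-understandable concept with a linear classifier, in the spirit of concept activation vectors (TCAV, Kim et al. 2018). The sketch below is purely illustrative and not taken from the paper; all names are assumptions.

# Illustrative concept probe: fit a linear classifier on a layer's
# activations for examples with and without a concept; the weight vector
# then points in the "concept direction" of representation space.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_direction(acts_with, acts_without):
    X = np.vstack([acts_with, acts_without])
    y = np.array([1] * len(acts_with) + [0] * len(acts_without))
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    v = probe.coef_[0]
    return v / np.linalg.norm(v)   # unit vector for the concept

def concept_score(activation, v):
    # How strongly one example's activation expresses the concept.
    return float(activation @ v)

Whether such a direction constitutes an explanation in the deductive-nomological sense analysed in the paper is exactly the kind of question the review raises.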
3.
  • Holmberg, Lars, et al. (author)
  • A Feature Space Focus in Machine Teaching
  • 2020
  • In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). - 9781728147161 - 9781728147178
  • Conference paper (peer-reviewed) abstract
    • Contemporary Machine Learning (ML) often focuses on large existing and labeled datasets and metrics around accuracy and performance. In pervasive online systems, conditions change constantly and there is a need for systems that can adapt. In Machine Teaching (MT) a human domain expert is responsible for the knowledge transfer and can thus address this. In my work, I focus on domain experts and the importance of the features available to the ML system and the space they span. This space confines the fragment of the physical world that is observable to the ML system. My investigation of the feature space is grounded in a conducted study and related theories. The result of this work is applicable when designing systems where domain experts have a key role as teachers.
4.
  • Holmberg, Lars (author)
  • Ageing and sexing birds
  • 2023
  • Conference paper (other academic/artistic) abstract
    • Ageing and sexing birds requires specialist knowledge and training concerning which characteristics to focus on for different species. An expert can formulate an explanation for a classification using these characteristics and, additionally, identify anomalies. Some characteristics require practical training, for example, the difference between moulted and non-moulted feathers, while some knowledge, like feather taxonomy and moulting patterns, can be learned without extensive practical training. An explanation formulated for a classification by a human stands in sharp contrast to an explanation produced by a trained neural network. These machine explanations are more an answer to a how-question, related to the inner workings of the neural network, than an answer to a why-question presenting domain-related characteristics useful for a domain expert. For machine-created explanations to be trustworthy, neural networks require a static use context and representative, independent and identically distributed training data. These prerequisites seldom hold in real-world settings. Some challenges related to this are neural networks' inability to identify exemplars outside the training distribution and aligning internal knowledge creation with characteristics used in the target domain. These types of questions are central in the active research field of explainable artificial intelligence (XAI), but there is a lack of hands-on experiments involving domain experts. This work aims to address the above issues with the goal of producing a prototype where domain experts can train a tool that builds on human expert knowledge in order to produce useful explanations. By using internalised domain expertise we aim at a tool that can produce useful explanations and even new insights for the domain. By working together with domain experts from Ottenby Observatory, our goal is to address central XAI challenges and, at the same time, add new perspectives useful for determining the age and sex of birds.
5.
  • Holmberg, Lars, et al. (author)
  • Contextual machine teaching
  • 2020
  • In: 2020 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops). - : IEEE. - 9781728147161 - 9781728147178
  • Conference paper (peer-reviewed) abstract
    • Machine learning research today is dominated by a technocentric perspective and is in many cases disconnected from the users of the technology. The machine teaching paradigm instead shifts the focus from machine learning experts towards the domain experts and users of machine learning technology. This shift opens up new perspectives on the current use of machine learning as well as new usage areas to explore. In this study, we apply and map existing machine teaching principles onto a contextual machine teaching implementation in a commuting setting. The aim is to highlight areas in machine teaching theory that require more attention. The main contribution of this work is an increased focus on available features, the feature space and the potential to transfer some of the domain expert's explanatory powers to the machine learning system.
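The shift the paper describes, from ML experts to domain experts as teachers, can be made concrete with a small sketch: the domain expert both chooses the features and supplies the teaching examples. The sketch below is a hedged illustration only; the commuting features and labels are invented, not taken from the study.

# Machine teaching pattern: the domain expert (not an ML engineer)
# selects the features and supplies deliberately chosen examples.
# All feature names and labels here are illustrative assumptions.
from dataclasses import dataclass
from sklearn.tree import DecisionTreeClassifier

@dataclass
class CommuteExample:
    weekday: int    # 0-6; the chosen features define what the
    hour: int       # ML system can observe of the world at all
    at_home: bool

def teach(model, examples, labels):
    X = [[e.weekday, e.hour, int(e.at_home)] for e in examples]
    model.fit(X, labels)
    return model

# The expert teaches a handful of deliberately chosen examples:
model = teach(DecisionTreeClassifier(max_depth=3),
              [CommuteExample(0, 8, True), CommuteExample(5, 9, True)],
              ["take_bus", "stay_home"])

Because the expert controls the feature space, the sketch also reflects the paper's main point: the features available to the system confine what it can ever learn, and they carry some of the expert's explanatory power.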
6.
  • Holmberg, Lars, et al. (author)
  • Deep Learning, generalisation and concepts
  • Other publication (other academic/artistic) abstract
    • Central to deep learning is an ability to generalise within a target domain in a way consistent with human beliefs within the same domain. A label inferred by the neural network then maps to a human mental representation of the concept corresponding to that label. If an explanation is requested concerning why a specific decision is promoted, it is important that we move from average-case performance metrics towards interpretable explanations that build on human-understandable concepts connected to the promoted label. In this work, we use Explainable Artificial Intelligence (XAI) methods to investigate whether internal knowledge representations in trained neural networks are aligned with, and generalise in correspondence to, human mental representations. Our findings indicate an epistemic misalignment in neural networks between machine and human knowledge representations. Consequently, if the goal is classifications explainable for end users, we can question the usefulness of neural networks trained without considering concept alignment.
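One way to make the abstract's alignment question operational is representational similarity analysis: compare the similarity structure of the network's embeddings with human similarity judgments over the same items. The sketch below is an illustrative assumption, not the XAI method used in the paper.

# Representational similarity analysis (RSA) sketch: rank-correlate the
# model's pairwise embedding similarities with a matrix of human
# similarity judgments over the same items. All inputs are assumed.
import numpy as np
from scipy.stats import spearmanr

def rsa_alignment(embeddings, human_similarity):
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    model_similarity = e @ e.T              # pairwise cosine similarity
    iu = np.triu_indices(len(embeddings), k=1)
    rho, _ = spearmanr(model_similarity[iu], human_similarity[iu])
    return rho                              # high rho = better alignment

A low correlation on concept-level items would be one concrete signature of the epistemic misalignment the abstract reports.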
7.
  • Holmberg, Lars, et al. (author)
  • Evaluating Interpretability in Machine Teaching
  • 2020
  • In: Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness. - Cham : Springer. - 9783030519988 - 9783030519995 ; pp. 54-65
  • Conference paper (other academic/artistic) abstract
    • Building interpretable machine learning agents is a challenge that needs to be addressed to make the agents trustworthy and to align the usage of the technology with human values. In this work, we focus on how to evaluate interpretability in a machine teaching setting, a setting that involves a human domain expert as a teacher in relation to a machine learning agent. By using a prototype in a study, we discuss the interpretability definition and show how interpretability can be evaluated on a functional, human and application level. We end the paper by discussing open questions and suggestions on how our results can be transferred to other domains.
8.
  • Holmberg, Lars (author)
  • Exploring Out-of-Distribution in Image Classification for Neural Networks Via Concepts
  • 2023
  • In: Proceedings of Eighth International Congress on Information and Communication Technology. - : Springer. - 9789819932429 - 9789819932436 ; pp. 155-171
  • Conference paper (peer-reviewed) abstract
    • The currently dominating artificial intelligence and machine learning technology, neural networks, builds on inductive statistical learning processes. Being void of knowledge that can be used deductively, these systems cannot distinguish exemplars that are part of the target domain from those that are not. This ability is critical when the aim is to build human trust in real-world settings and essential to avoid usage in domains wherein a system cannot be trusted. In the work presented here, we conduct two qualitative contextual user studies and one controlled experiment to uncover research paths and design openings for the sought distinction. Through our experiments, we find a need to refocus from average-case metrics and benchmarking datasets toward systems that can be falsified. The work uncovers and lays bare the need to incorporate and internalise a domain ontology in the systems and/or present evidence for a decision in a fashion that allows humans to use their unique knowledge and reasoning capability. Additional material and code to reproduce our experiments can be found at https://github.com/k3larra/ood.
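The paper's own experiments are in the linked repository. As a generic illustration of the problem it studies, the standard maximum-softmax-probability baseline (Hendrycks & Gimpel, 2017) is sketched below; this is a common reference method, not necessarily the one used in the paper, and the threshold is an assumption.

# Maximum softmax probability (MSP) baseline for out-of-distribution
# detection: a low top-class probability suggests the input lies outside
# the training distribution. The threshold value is an assumption.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def flag_ood(logits, threshold=0.7):
    confidence = softmax(logits).max(axis=1)
    return confidence < threshold   # True = likely out-of-distribution

The abstract's call for systems that can be falsified is visible here: MSP only expresses low confidence among the known labels; it never asserts that an exemplar lies outside the target domain.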
9.
  • Holmberg, Lars (author)
  • Human In Command Machine Learning
  • 2021
  • Licentiate thesis (other academic/artistic) abstract
    • Machine Learning (ML) and Artificial Intelligence (AI) impact many aspects of human life, from recommending a significant other to assisting the search for extraterrestrial life. The area develops rapidly and exciting unexplored design spaces are constantly laid bare. The focus in this work is on one of these areas: ML systems where decisions concerning ML model training, usage and selection of target domain lie in the hands of domain experts. This work thus concerns ML systems that function as a tool that augments and/or enhances human capabilities. The approach presented is denoted Human In Command ML (HIC-ML). To enquire into this research domain, design experiments of varying fidelity were used. Two of these experiments focus on augmenting human capabilities and target the domains of commuting and battery sorting. One experiment focuses on enhancing human capabilities by identifying similar hand-painted plates. The experiments are used as illustrative examples to explore settings where domain experts can potentially train an ML model independently, interact with it in an iterative fashion, and interpret and understand its decisions. HIC-ML should be seen as a governance principle that focuses on adding value and meaning to users. In this work, concrete application areas are presented and discussed. To open up the design of ML-based products for the area, an abstract model for HIC-ML is constructed and design guidelines are proposed. In addition, terminology and abstractions useful when designing for explicability are presented, imposing structure and rigidity derived from scientific explanations. Together, this opens up for a contextual shift in ML and makes new application areas probable, areas that naturally couple the usage of AI technology to human virtues and can potentially, as a consequence, result in a democratisation of the usage of and knowledge concerning this powerful technology.