SwePub
Search the SwePub database

Result list for the search "WFRF:(Anjomshoae Sule)"

Search: WFRF:(Anjomshoae Sule)

  • Result 1-10 of 10
1.
  • Anjomshoae, Sule, 1985- (author)
  • Context-based explanations for machine learning predictions
  • 2022
  • Doctoral thesis (other academic/artistic). Abstract:
    • In recent years, growing concern regarding trust in algorithmic decision-making has drawn attention to more transparent and interpretable models. Laws and regulations are moving towards requiring this functionality from information systems to prevent unintended side effects; for example, the European Union's General Data Protection Regulation (GDPR) sets out the right to be informed about machine-generated decisions. Individuals affected by these decisions can question, confront, and challenge the inferences automatically produced by machine learning models. Consequently, such matters necessitate that AI systems be transparent and explainable for various practical applications. Furthermore, explanations help evaluate these systems' strengths and limitations, thereby fostering trustworthiness. As important as this is, existing studies mainly focus on creating mathematically interpretable models or on explaining black-box algorithms with intrinsically interpretable surrogate models. In general, these explanations are intended for technical users to evaluate the correctness of a model and are often hard to interpret for general users. Given the critical need for methods that consider end-user requirements, this thesis focuses on generating intelligible explanations for predictions made by machine learning algorithms. As a starting point, we present the outcome of a systematic literature review of the existing research on generating and communicating explanations in goal-driven eXplainable AI (XAI), such as agents and robots, which are known for their ability to communicate their decisions in human-understandable terms. Influenced by that, we discuss the design and evaluation of our proposed explanation methods for black-box algorithms in different machine learning applications, including image recognition, scene classification, and disease prediction. Taken together, the methods and tools presented in this thesis could be used to explain machine learning predictions or as a baseline against which to compare other explanation techniques, enabling interpretation indicators for experts and non-technical users. The findings should also be of interest to domains that use machine learning models for high-stakes decision-making and wish to investigate the practical utility of the proposed explanation methods.
  •  
2.
  • Anjomshoae, Sule, et al. (author)
  • Context-based image explanations for deep neural networks
  • 2021
  • In: Image and Vision Computing. - : Elsevier. - 0262-8856 .- 1872-8138. ; 116
  • Journal article (peer-reviewed). Abstract:
    • With the increased use of machine learning in decision-making scenarios, there has been a growing interest in explaining and understanding the outcomes of machine learning models. Despite this growing interest, existing works on interpretability and explanations have been mostly intended for expert users. Explanations for general users have been neglected in many usable and practical applications (e.g., image tagging, caption generation). It is important for non-technical users to understand features and how they affect an instance-specific prediction to satisfy the need for justification. In this paper, we propose a model-agnostic method for generating context-based explanations aimed at general users. We implement partial masking on segmented components to identify the contextual importance of each segment in scene classification tasks. We then generate explanations based on feature importance. We present visual and text-based explanations: (i) a saliency map that presents the pertinent components with a descriptive textual justification, and (ii) a visual map with a color bar graph showing the relative importance of each feature for a prediction. Evaluating the explanations in a user study (N = 50), we observed that our proposed explanation method visually outperformed existing gradient- and occlusion-based methods. Hence, our proposed explanation method could be deployed to explain models' decisions to non-expert users in real-world applications.
  •  
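
The partial-masking procedure described in record 2 above can be illustrated with a short sketch. This is not the authors' implementation; the segmentation routine (skimage's SLIC), the predict_fn batch interface, and the grey-out strategy are all assumptions made for the example.

    # Minimal sketch: segment the image, grey out one segment at a time, and
    # record how much the score for the predicted class drops.
    import numpy as np
    from skimage.segmentation import slic

    def segment_importance(image, predict_fn, target_class, n_segments=50):
        segments = slic(image, n_segments=n_segments, start_label=0)
        baseline = predict_fn(image[np.newaxis])[0, target_class]
        importance = {}
        for seg_id in np.unique(segments):
            masked = image.copy()
            masked[segments == seg_id] = image.mean()  # partial masking of one segment
            score = predict_fn(masked[np.newaxis])[0, target_class]
            importance[seg_id] = float(baseline - score)  # larger drop = more important
        return segments, importance

The resulting per-segment scores could then be rendered as a saliency overlay and a bar graph, as the paper describes.
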
3.
  • Anjomshoae, Sule, et al. (author)
  • Explainable Agents and Robots : Results from a Systematic Literature Review
  • 2019
  • In: AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems. - : International Foundation for Autonomous Agents and MultiAgent Systems. - 9781450363099 ; , s. 1078-1088
  • Conference paper (peer-reviewed). Abstract:
    • Humans increasingly rely on complex systems that heavily adopt Artificial Intelligence (AI) techniques. Such systems are employed in a growing number of domains, and making them explainable is a pressing priority. Recently, the domain of eXplainable Artificial Intelligence (XAI) emerged with the aim of fostering transparency and trustworthiness. Several reviews have been conducted; nevertheless, most of them deal with data-driven XAI to overcome the opaqueness of black-box algorithms, and contributions addressing goal-driven XAI (e.g., explainable agency for robots and agents) are still missing. This paper aims to fill this gap by proposing a Systematic Literature Review. The main findings are: (i) a considerable portion of the papers propose conceptual studies, lack evaluations, or tackle relatively simple scenarios; (ii) almost all of the studied papers deal with robots/agents explaining their behaviors to human users, and very few works address inter-robot (inter-agent) explainability; and (iii) while providing explanations to non-expert users has been outlined as a necessity, only a few works address the issues of personalization and context-awareness.
  •  
4.
  • Anjomshoae, Sule, 1985-, et al. (author)
  • Explaining graph convolutional network predictions for clinicians : an explainable AI approach to Alzheimer’s disease classification
  • 2024
  • In: Frontiers in Artificial Intelligence. - : Frontiers Media S.A.. - 2624-8212. ; 6
  • Journal article (peer-reviewed). Abstract:
    • Introduction: Graph-based representations are becoming more common in the medical domain, where each node defines a patient, and the edges signify associations between patients, relating individuals with disease and symptoms in a node classification task. In this study, a Graph Convolutional Networks (GCN) model was utilized to capture differences in neurocognitive, genetic, and brain atrophy patterns that can predict cognitive status, ranging from Normal Cognition (NC) to Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD), on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Elucidating model predictions is vital in medical applications to promote clinical adoption and establish physician trust. Therefore, we introduce a decomposition-based explanation method for individual patient classification. Methods: Our method involves analyzing the output variations resulting from decomposing input values, which allows us to determine the degree of impact on the prediction. Through this process, we gain insight into how each feature from various modalities, both at the individual and group levels, contributes to the diagnostic result. Given that graph data contains critical information in edges, we studied relational data by silencing all the edges of a particular class, thereby obtaining explanations at the neighborhood level. Results: Our functional evaluation showed that the explanations remain stable with minor changes in input values, specifically for edge weights exceeding 0.80. Additionally, our comparative analysis against SHAP values yielded comparable results with significantly reduced computational time. To further validate the model's explanations, we conducted a survey study with 11 domain experts. The majority (71%) of the responses confirmed the correctness of the explanations, with a rating of above six on a 10-point scale for the understandability of the explanations. Discussion: Strategies to overcome perceived limitations, such as the GCN's overreliance on demographic information, were discussed to facilitate future adoption into clinical practice and gain clinicians' trust as a diagnostic decision support system.
  •  
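
The neighbourhood-level, edge-silencing analysis mentioned in record 4 above can be sketched roughly as follows. The gcn_predict function, the dense adjacency-matrix layout, and the node_labels array are hypothetical placeholders, not the authors' code.

    # Rough sketch: silence every edge that touches nodes of one class and
    # compare the GCN output for a patient of interest before and after.
    import numpy as np

    def silence_class_edges(adj, node_labels, target_class):
        adj = adj.copy()
        members = np.where(node_labels == target_class)[0]
        adj[members, :] = 0.0   # drop edges leaving those nodes
        adj[:, members] = 0.0   # drop edges entering those nodes
        return adj

    def neighbourhood_effect(features, adj, node_labels, node_id, target_class, gcn_predict):
        full = gcn_predict(features, adj)[node_id]
        silenced_adj = silence_class_edges(adj, node_labels, target_class)
        silenced = gcn_predict(features, silenced_adj)[node_id]
        return full - silenced  # shift attributable to that class's neighbourhood
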
5.
  • Anjomshoae, Sule, et al. (author)
  • Explanations of black-box model predictions by contextual importance and utility
  • 2019
  • In: Explainable, transparent autonomous agents and multi-agent systems. - Cham : Springer. - 9783030303907 - 9783030303914 ; , s. 95-109
  • Book chapter (peer-reviewed). Abstract:
    • Significant advances in autonomous systems, together with an immensely wider application domain, have increased the need for trustable intelligent systems. Explainable artificial intelligence is gaining considerable attention among researchers and developers to address this requirement. Although there is an increasing number of works on interpretable and transparent machine learning algorithms, they are mostly intended for technical users; explanations for the end user have been neglected in many usable and practical applications. In this work, we present the Contextual Importance (CI) and Contextual Utility (CU) concepts to extract explanations that are easily understandable by experts as well as novice users. This method explains the prediction results without transforming the model into an interpretable one. We present an example of providing explanations for linear and non-linear models to demonstrate the generalizability of the method. CI and CU are numerical values that can be presented to the user in visual and natural language form to justify actions and explain reasoning for individual instances, situations, and contexts. We show the utility of the explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest). The experimental results show the feasibility and validity of the provided explanation methods.
  •  
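
For record 5 above, CI and CU are typically estimated by varying one feature over its value range while holding the others fixed; the sketch below follows that pattern. The predict_fn interface, the sampled feature range, and the output bounds out_min/out_max are assumptions for illustration, not the chapter's reference implementation.

    # Estimate Contextual Importance (CI) and Contextual Utility (CU) for one
    # feature of a single instance x by sampling that feature's range.
    import numpy as np

    def ciu_for_feature(x, feature_idx, feature_range, predict_fn,
                        out_min=0.0, out_max=1.0, n_samples=100):
        y = predict_fn(x.reshape(1, -1))[0]
        samples = np.tile(x, (n_samples, 1))
        samples[:, feature_idx] = np.linspace(feature_range[0], feature_range[1], n_samples)
        outputs = predict_fn(samples)
        c_min, c_max = outputs.min(), outputs.max()
        ci = (c_max - c_min) / (out_max - out_min)   # how much this feature can move the output
        cu = (y - c_min) / (c_max - c_min + 1e-12)   # how favourable the current value is
        return ci, cu

Because CI and CU are plain numbers, they can be reported in a bar chart or substituted into natural-language templates, as the chapter suggests.
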
6.
  • Anjomshoae, Sule, 1985-, et al. (author)
  • Py-CIU: A Python Library for Explaining Machine Learning Predictions Using Contextual Importance and Utility
  • 2020
  • In: Proceedings.
  • Conference paper (other academic/artistic). Abstract:
    • In this paper, we present the Py-CIU library, a generic Python tool for applying the Contextual Importance and Utility (CIU) explainable machine learning method. CIU uses concepts from decision theory to explain a machine learning model's prediction for a specific data point by investigating the importance and usefulness of individual features (or feature combinations) to the prediction. The explanations aim to be intelligible to machine learning experts as well as non-technical users. The library can be applied to any black-box model that outputs a prediction value for all classes.
  •  
7.
  • Anjomshoae, Sule, et al. (author)
  • Visual Explanations for DNNs with Contextual Importance
  • 2021
  • In: Explainable and Transparent AI and Multi-Agent Systems. - Cham : Springer. - 9783030820169 - 9783030820176 ; , s. 83-96
  • Conference paper (peer-reviewed). Abstract:
    • Autonomous agents and robots with vision capabilities powered by machine learning algorithms such as Deep Neural Networks (DNNs) are increasingly deployed in industrial environments. While DNNs have improved accuracy in many prediction tasks, it has been shown that even modest disturbances in their input produce erroneous results. Such errors have to be detected and dealt with to make the deployment of DNNs secure in real-world applications. Several explanation methods have been proposed to understand the inner workings of these models. In this paper, we present how Contextual Importance (CI) can make DNN results more explainable in an image classification task without peeking inside the network. We produce explanations for individual classifications by perturbing an input image through over-segmentation and evaluating the effect on the prediction score. The output then highlights the segments that contribute most to a prediction. The results are compared with two explanation methods, namely mask perturbation and LIME. The results for the MNIST hand-written digit dataset produced by the three methods show that CI provides better visual explainability.
  •  
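
Record 7 above combines the two ideas sketched earlier: the image is over-segmented, each segment is perturbed, and the resulting swing in the prediction score is read as a Contextual Importance value. A minimal, purely illustrative bridge between the two earlier sketches could look like this.

    # Turn raw per-segment score drops (from the masking sketch after record 2)
    # into contextual-importance values in [0, 1].
    def contextual_importance_per_segment(importance, out_min=0.0, out_max=1.0):
        span = out_max - out_min
        return {seg: max(drop, 0.0) / span for seg, drop in importance.items()}

The highest-valued segments would then be highlighted, which is what the visual comparison against mask perturbation and LIME evaluates.
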
8.
  • Omeiza, Daniel, et al. (author)
  • From Spoken Thoughts to Automated Driving Commentary : Predicting and Explaining Intelligent Vehicles' Actions
  • 2022
  • In: 2022 IEEE Intelligent Vehicles Symposium (IV). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665488211 ; , s. 1040-1047
  • Conference paper (peer-reviewed). Abstract:
    • In commentary driving, drivers verbalise their observations, assessments, and intentions. By speaking their thoughts out loud, both learner and expert drivers are able to create a better understanding and awareness of their surroundings. In the intelligent-vehicle context, automated driving commentary can provide intelligible explanations of driving actions, thereby assisting a driver or an end user during driving operations in challenging and safety-critical scenarios. In this paper, we conducted a field study in which we deployed a research vehicle in an urban environment to obtain data. While collecting sensor data of the vehicle's surroundings, we obtained driving commentary from a driving instructor using the think-aloud protocol. We analysed the driving commentary and uncovered an explanation style: the driver first announces his observations, then announces his plans, and finally makes general remarks; he also makes counterfactual comments. We demonstrated how factual and counterfactual natural language explanations that follow this style can be automatically generated using a transparent tree-based approach. Generated explanations for longitudinal actions (e.g., stop and move) were deemed more intelligible and plausible by human judges than those for lateral actions, such as lane changes. We discuss how our approach can be built on in the future to realise more robust and effective explainability for driver assistance as well as for partial and conditional automation of driving functions.
  •  
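
The transparent tree-based generation of commentary in record 8 above can be approximated by walking the decision path of a fitted tree and verbalising each split. The feature names, action labels, and sentence template below are invented for illustration and are not the paper's actual system.

    # Sketch: turn the decision path of a scikit-learn tree into a one-sentence
    # factual explanation of the chosen driving action.
    from sklearn.tree import DecisionTreeClassifier

    def explain_action(tree, x, feature_names, action_names):
        path = tree.decision_path(x.reshape(1, -1)).indices
        clauses = []
        for node in path[:-1]:                      # the last node is the leaf
            feat = tree.tree_.feature[node]
            thr = tree.tree_.threshold[node]
            op = "<=" if x[feat] <= thr else ">"
            clauses.append(f"{feature_names[feat]} {op} {thr:.1f}")
        action = action_names[int(tree.predict(x.reshape(1, -1))[0])]
        return f"I {action} because " + " and ".join(clauses) + "."

A counterfactual variant would rerun the same walk for an alternative action's leaf and report the splits that would have to change.
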
9.
  • Omeiza, Daniel, et al. (author)
  • Towards Explainable and Trustworthy Autonomous Physical Systems
  • 2021
  • In: CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI '21 Extended Abstracts). - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450380959
  • Conference paper (peer-reviewed). Abstract:
    • The safe deployment of autonomous physical systems in real-world scenarios requires them to be explainable and trustworthy, especially in critical domains. In contrast with 'black-box' systems, explainable and trustworthy autonomous physical systems lend themselves to easy assessment by system designers and regulators, paving the way for improvements that can lead to enhanced performance as well as increased public trust. In this one-day virtual workshop, we aim to gather a globally distributed group of researchers and practitioners to discuss the opportunities and social challenges in the design, implementation, and deployment of explainable and trustworthy autonomous physical systems, especially in a post-pandemic era. Interactions will be fostered through panel discussions and a series of spotlight talks. To ensure a lasting impact of the workshop, we will conduct a pre-workshop survey examining the public perception of the trustworthiness of autonomous physical systems. Further, we will publish a summary report providing details about the survey as well as the challenges identified in the workshop's panel discussions.
  •  
10.
  • Othman, Nur Zuraifah Syazrah, et al. (author)
  • Perspectives of gestures for gestural-based interaction systems : towards natural interaction
  • 2019
  • In: Human IT. - Borås : Borås universitet. - 1402-1501 .- 1402-151X. ; 14:3, s. 1-25
  • Journal article (peer-reviewed). Abstract:
    • A frequently mentioned benefit of gesture-based input to computing systems is that it provides naturalness in interaction. However, it is not uncommon to find gesture sets consisting of arbitrary (hand) formations with illogically mapped functions. This defeats the purpose of using gestures as a means to facilitate natural interaction. The root of the issue seems to stem from a separation between what is deemed a gesture in the computing field and what is deemed a gesture linguistically. To find common ground, this paper explores the fundamental aspects of gestures in the literature of psycho-linguistic-based studies and HCI-based studies. The discussion focuses on the connection between the two perspectives: in the definition aspect through the concepts of meaning and context, and in the classification aspect through the mapping of tasks (manipulative or communicative) to gesture functions (ergotic, epistemic, or semiotic). By highlighting how these two perspectives interrelate, this paper provides a basis for research works that intend to propose gestures as the interaction modality for interactive systems.
  •  