SwePub
Search the SwePub database

  Advanced search

Result list for search "(WFRF:(Barua Shaibal)) srt2:(2020-2024) srt2:(2022)"

Search: (WFRF:(Barua Shaibal)) srt2:(2020-2024) > (2022)

  • Results 1-4 of 4
1.
  • Ahmed, Mobyen Uddin, Dr, 1976-, et al. (author)
  • When a CBR in Hand is Better than Twins in the Bush
  • 2022
  • In: CEUR Workshop Proceedings, vol. 3389. CEUR-WS, pp. 141-152
  • Conference paper (peer-reviewed) abstract
    • AI methods referred to as interpretable are often discredited as inaccurate by supporters of the existence of a trade-off between interpretability and accuracy. In many problem contexts, however, this trade-off does not hold. This paper discusses a regression problem context for predicting flight take-off delays, where the most accurate regression model was trained via the XGBoost implementation of gradient boosted decision trees. After building an XGB-CBR Twin and converting the XGBoost feature importance into global weights in the CBR model, the resultant CBR model alone provides the most accurate local prediction, maintains the global importance to provide a global explanation of the model, and offers the most interpretable representation for local explanations. This resultant CBR model becomes a benchmark of accuracy and interpretability for this problem context, and hence it is used to evaluate the two additive feature attribution methods SHAP and LIME in explaining the XGBoost regression model. The results with respect to local accuracy and feature attribution lead to potentially valuable future work. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org) (An illustrative sketch of the importance-to-weight transfer follows this record.)
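A minimal sketch of the XGB-CBR idea described in the record above, assuming hypothetical flight-delay features and synthetic data (the paper's actual dataset, feature set and CBR implementation are not reproduced here): an XGBoost regressor is trained, its global feature importances are reused as similarity weights in a simple case-based reasoning (CBR) retrieval, and the retrieved cases both produce the prediction and serve as its local explanation.

# Illustrative sketch only: importance-weighted CBR retrieval built from an XGBoost twin.
# Feature names and data are assumptions for demonstration, not the paper's setup.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                                   # e.g. [airport load, weather, slot, turnaround]
y = 30 * X[:, 0] + 10 * X[:, 1] + rng.normal(0, 2, 500)    # synthetic take-off delay in minutes

xgb = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)
weights = xgb.feature_importances_ / xgb.feature_importances_.sum()  # global importances -> CBR weights

def cbr_predict(query, case_features, case_targets, w, k=5):
    """Weighted nearest-neighbour retrieval: reuse the k most similar past cases."""
    dist = np.sqrt((w * (case_features - query) ** 2).sum(axis=1))
    nearest = np.argsort(dist)[:k]
    return case_targets[nearest].mean(), nearest            # prediction + the explanatory cases

pred, cases = cbr_predict(rng.random(4), X, y, weights)
print(f"predicted delay ~ {pred:.1f} min, supported by cases {cases}")

The retrieved cases themselves act as the local explanation, which is the interpretability argument the abstract makes for preferring the CBR model over post hoc attribution of the XGBoost model.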
2.
  • Degas, A., et al. (author)
  • A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management : Current Trends and Development with Future Research Trajectory
  • 2022
  • In: Applied Sciences. MDPI. ISSN 2076-3417; 12:3
  • Research review (peer-reviewed) abstract
    • Air Traffic Management (ATM) will become more complex in the coming decades due to the growth and increased complexity of aviation, and it has to be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decisions made in ATM. Nonetheless, Artificial Intelligence (AI), which is one of the most researched topics in computer science, has not quite reached end users in the ATM domain. In this paper, we analyse the state of the art with regard to the usefulness of AI within the aviation/ATM domain. This includes research work of the last decade on AI in ATM, the extraction of relevant trends and features, and the extraction of representative dimensions. We analysed how general and ATM-specific eXplainable Artificial Intelligence (XAI) works, examining where and why XAI is needed, how it is currently provided, and its limitations; we then synthesise the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, and provide an example of its application in a scenario in 2030. The paper concludes that AI systems within ATM need further research for their acceptance by end-users. The development of appropriate XAI methods and their validation by appropriate authorities and end-users are key issues that need to be addressed. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
3.
  • Hurter, C., et al. (author)
  • Usage of more transparent and explainable conflict resolution algorithm : Air traffic controller feedback
  • 2022
  • In: Transportation Research Procedia. Elsevier B.V. ISSN 2352-1457, 2352-1465; 66:C, pp. 270-278
  • Journal article (peer-reviewed) abstract
    • Recently, Artificial Intelligence (AI) algorithms have received increasing interest in various application domains, including Air Traffic Management (ATM). Different AI algorithms, in particular Machine Learning (ML) algorithms, are used to provide decision support for autonomous decision-making tasks in the ATM domain, e.g., predicting air transportation traffic and optimizing traffic flows. However, most of the time these automated systems are not accepted or trusted by the intended users, as the decisions provided by AI are often opaque, non-intuitive and not understandable by human operators. Safety is the major pillar of air traffic management, and no black-box process can be inserted into a decision-making process when human life is involved. To address this challenge related to the transparency of automated systems in the ATM domain, we investigated AI methods for predicting air transportation traffic conflicts and optimizing traffic flows based on the domain of Explainable Artificial Intelligence (XAI). Here, the explainability of AI models, in terms of understanding a decision (i.e., post hoc interpretability) and understanding how the model works (i.e., transparency), can be provided for air traffic controllers. In this paper, we report our research directions and our findings to support better decision making with AI algorithms with extended transparency. (A minimal post hoc explanation sketch follows this record.)
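A minimal sketch of post hoc interpretability in the spirit of the record above, assuming a hypothetical conflict classifier trained on synthetic aircraft-pair features (the paper's actual conflict resolution algorithm and data are not reproduced): SHAP attributes a single prediction to its input features, the kind of decision-level explanation that could be shown to a controller.

# Illustrative sketch only: SHAP-based post hoc explanation of a (hypothetical) conflict predictor.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = rng.random((1000, 3))                             # assumed features: [horizontal sep., vertical sep., closure rate]
y = ((X[:, 0] < 0.3) & (X[:, 2] > 0.6)).astype(int)   # synthetic "conflict" label

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
contrib = explainer.shap_values(X[:1])                # per-feature contribution for one aircraft pair
for name, c in zip(["horizontal_sep", "vertical_sep", "closure_rate"], np.ravel(contrib)):
    print(f"{name:15s} {c:+.3f}")

Each signed contribution states how much a feature pushed this particular prediction towards or away from a conflict, addressing the "understanding a decision" (post hoc interpretability) side; explaining how the model works overall (transparency) would need separate treatment.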
4.
  • Islam, Mir Riyanul, Dr. 1991-, et al. (author)
  • A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks
  • 2022
  • In: Applied Sciences. MDPI. ISSN 2076-3417; 12:3
  • Research review (peer-reviewed) abstract
    • Artificial intelligence (AI) and machine learning (ML) have recently been radically improved and are now being employed in almost every application domain to develop automated or semi-automated systems. To facilitate greater human acceptability of these systems, explainable artificial intelligence (XAI) has experienced significant growth over the last couple of years, as highly accurate models have been developed that offer little explainability and interpretability. The literature shows evidence from numerous studies on the philosophy and methodologies of XAI. Nonetheless, there is an evident scarcity of secondary studies in connection with application domains and tasks, let alone review studies following prescribed guidelines, that can enable researchers to understand the current trends in XAI and that could lead to future research for domain- and application-specific method development. Therefore, this paper presents a systematic literature review (SLR) on the recent developments of XAI methods and evaluation metrics concerning different application domains and tasks. This study considers 137 articles published in recent years and identified through prominent bibliographic databases. The systematic synthesis of research articles resulted in several analytical findings: XAI methods are mostly developed for safety-critical domains worldwide, deep learning and ensemble models are being exploited more than other types of AI/ML models, visual explanations are more acceptable to end-users, and robust evaluation metrics are being developed to assess the quality of explanations. Research studies have been performed on adding explanations to widely used AI/ML models for expert users. However, more attention is required to generate explanations for general users in sensitive domains such as finance and the judicial system.