SwePub

Search results for "WFRF:(Methnani Leila)"

  • Results 1-5 of 5
1.
  • Methnani, Leila, et al. (author)
  • Clash of the explainers : argumentation for context-appropriate explanations
  • 2024
  • In: Artificial Intelligence. ECAI 2023. Springer. ISBN 9783031503955, 9783031503962. pp. 7-23
  • Conference paper (peer-reviewed), abstract:
    • Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task. There is no single approach that is best suited for a given context. This paper aims to address the challenge of selecting the most appropriate explainer given the context in which an explanation is required. For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation. If—in general—no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Due to the transparency they afford, we propose employing argumentation techniques to reach an agreement over the most suitable explainers from a given set of possible explainers. In this paper, we propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest. By formalizing supporting premises—and inferences—we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the techniques and prioritise the best one for the given context, while also offering transparency into the selection decision.
2.
  • Methnani, Leila, et al. (author)
  • Embracing AWKWARD! Real-time Adjustment of Reactive Plans Using Social Norms
  • 2022
  • In: Coordination, organizations, institutions, norms, and ethics for governance of multi-agent systems XV. Cham: Springer Nature. ISBN 9783031208447, 9783031208454. pp. 54-72
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the AWKWARD agent architecture for the development of agents in Multi-Agent Systems. AWKWARD agents can have their plans re-configured in real time to align with social role requirements under changing environmental and social circumstances. The proposed hybrid architecture makes use of Behaviour Oriented Design (BOD) to develop agents with reactive planning and of the well-established OperA framework to provide organisational, social, and interaction definitions in order to validate and adjust agents' behaviours. Together, OperA and BOD can achieve real-time adjustment of agent plans for evolving social roles, while providing the additional benefit of transparency into the interactions that drive this behavioural change in individual agents. We present this architecture to motivate the bridging between traditional symbolic- and behaviour-based AI communities, where such combined solutions can help MAS researchers in their pursuit of building stronger, more robust intelligent agent teams. We use DOTA2—a game where success is heavily dependent on social interactions—as a medium to demonstrate a sample implementation of our proposed hybrid architecture.
3.
  • Methnani, Leila, et al. (author)
  • Let Me Take Over : Variable Autonomy for Meaningful Human Control
  • 2021
  • In: Frontiers in Artificial Intelligence. Frontiers Media S.A. ISSN 2624-8212. Vol. 4
  • Journal article (peer-reviewed), abstract:
    • As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making the effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
4.
  • Methnani, Leila, et al. (author)
  • Operationalising AI ethics : conducting socio-technical assessment
  • 2023
  • In: Human-Centered Artificial Intelligence. Springer. ISBN 9783031243486. pp. 304-321
  • Conference paper (peer-reviewed), abstract:
    • Several high-profile incidents involving Artificial Intelligence (AI) have captured public attention and increased demand for regulation. Low public trust in, and negative attitudes towards, AI reinforce the need for concrete policy around its development and use. However, current guidelines and standards rolled out by institutions globally are considered by many as high-level and open to interpretation, making them difficult to put into practice. This paper presents ongoing research in the field of Responsible AI and explores numerous methods of operationalising AI ethics. If AI is to be effectively regulated, it must not be considered as a technology alone—AI is embedded in the fabric of our societies and should thus be treated as a socio-technical system, requiring multi-stakeholder involvement and employment of continuous value-based methods of assessment. When putting guidelines and standards into practice, context is of critical importance. The methods and frameworks presented in this paper emphasise this need and pave the way towards operational AI ethics.
5.
  • Methnani, Leila, et al. (author)
  • Who's in charge here? A survey on trustworthy AI in variable autonomy robotic systems
  • 2024
  • In: ACM Computing Surveys. Association for Computing Machinery (ACM). ISSN 0360-0300, 1557-7341. Vol. 56, no. 7
  • Journal article (peer-reviewed), abstract:
    • This article surveys the Variable Autonomy (VA) robotics literature that considers two contributory elements to Trustworthy AI: transparency and explainability. These elements should play a crucial role when designing and adopting robotic systems, especially in VA, where poor or untimely adjustments of the system's level of autonomy can lead to errors, control conflicts, user frustration, and ultimately disuse of the system. Despite this need, transparency and explainability are, to the best of our knowledge, mostly overlooked in the VA robotics literature or are not considered explicitly. In this article, we aim to present and examine the most recent contributions to the VA literature concerning transparency and explainability. In addition, we propose a way of thinking about VA by breaking these two concepts down based on: the mission of the human-robot team; who the stakeholder is; what needs to be made transparent or explained; why they need it; and how it can be achieved. Lastly, we provide insights and propose ways to move VA research forward. Our goal with this article is to raise awareness and foster inter-community discussions among the Trustworthy AI and the VA robotics communities.