SwePub
Search the SwePub database

Result list for search "WFRF:(Theodorou Andreas Dr)"

Search: WFRF:(Theodorou Andreas Dr)

  • Results 1-20 of 20
1.
  • Methnani, Leila, et al. (author)
  • Embracing AWKWARD! Real-time Adjustment of Reactive Plans Using Social Norms
  • 2022
  • In: Coordination, organizations, institutions, norms, and ethics for governance of multi-agent systems XV. - Cham: Springer Nature. - 9783031208447 - 9783031208454; pp. 54-72
  • Conference paper (peer-reviewed), abstract:
    • This paper presents the AWKWARD agent architecture for the development of agents in Multi-Agent Systems. AWKWARD agents can have their plans re-configured in real time to align with social role requirements under changing environmental and social circumstances. The proposed hybrid architecture makes use of Behaviour Oriented Design (BOD) to develop agents with reactive planning and of the well-established OperA framework to provide organisational, social, and interaction definitions in order to validate and adjust agents’ behaviours. Together, OperA and BOD can achieve real-time adjustment of agent plans for evolving social roles, while providing the additional benefit of transparency into the interactions that drive this behavioural change in individual agents. We present this architecture to motivate the bridging between traditional symbolic- and behaviour-based AI communities, where such combined solutions can help MAS researchers in their pursuit of building stronger, more robust intelligent agent teams. We use DOTA2—a game where success is heavily dependent on social interactions—as a medium to demonstrate a sample implementation of our proposed hybrid architecture.
2.
  • Aler Tubella, Andrea, 1990-, et al. (author)
  • Governance by glass-box : implementing transparent moral bounds for AI behaviour
  • 2019
  • In: Proceedings of the 28th International Joint Conference on Artificial Intelligence. - California: International Joint Conferences on Artificial Intelligence Organization; pp. 5787-5793
  • Conference paper (other academic/artistic), abstract:
    • Artificial Intelligence (AI) applications are being used to predict and assess behaviour in multiple domains which directly affect human well-being. However, if AI is to improve people’s lives, then people must be able to trust it, by being able to understand what the system is doing and why. Although transparency is often seen as the requirement in this case, realistically it might not always be possible, whereas the need to ensure that the system operates within set moral bounds remains. In this paper, we present an approach to evaluate the moral bounds of an AI system based on the monitoring of its inputs and outputs. We place a ‘Glass-Box’ around the system by mapping moral values into explicit verifiable norms that constrain inputs and outputs, in such a way that if these remain within the box we can guarantee that the system adheres to the value. The focus on inputs and outputs allows for the verification and comparison of vastly different intelligent systems; from deep neural networks to agent-based systems. The explicit transformation of abstract moral values into concrete norms brings great benefits in terms of explainability; stakeholders know exactly how the system is interpreting and employing relevant abstract moral human values and can calibrate their trust accordingly. Moreover, by operating at a higher level we can check the compliance of the system with different interpretations of the same value.
3.
  • Aler Tubella, Andrea, 1990-, et al. (author)
  • Interrogating the black box : Transparency through information-seeking dialogues
  • 2021
  • In: Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS. - International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS). - 9781713832621; pp. 106-114
  • Conference paper (peer-reviewed), abstract:
    • This paper is preoccupied with the following question: given a (possibly opaque) learning system, how can we understand whether its behaviour adheres to governance constraints? The answer can be quite simple: we just need to “ask” the system about it. We propose to construct an investigator agent to query a learning agent (the suspect agent) to investigate its adherence to a given ethical policy in the context of an information-seeking dialogue, modeled in formal argumentation settings. This formal dialogue framework is the main contribution of this paper. Through it, we break down compliance checking mechanisms into three modular components, each of which can be tailored to various needs in a vast number of ways: an investigator agent, a suspect agent, and an acceptance protocol determining whether the responses of the suspect agent comply with the policy. This acceptance protocol presents a fundamentally different approach to aggregation: rather than using quantitative methods to deal with the non-determinism of a learning system, we leverage the use of argumentation semantics to investigate the notion of properties holding consistently. Overall, we argue that the introduced formal dialogue framework opens many avenues both in the area of compliance checking and in the analysis of properties of opaque systems.
4.
5.
  • Bogani, Ronny, et al. (author)
  • Garbage in, toxic data out : a proposal for ethical artificial intelligence sustainability impact statements
  • 2023
  • In: AI and Ethics. - Springer Nature. - 2730-5953, 2730-5961; vol. 3, pp. 1135-1142
  • Journal article (peer-reviewed), abstract:
    • Data and autonomous systems are taking over our lives; from healthcare to smart homes, very few aspects of our day-to-day lives are not permeated by them. The technological advances enabled by these technologies are limitless. However, with these advantages come challenges. As these technologies encompass more and more aspects of our lives, we are forgetting the ethical, legal, safety and moral concerns that arise as an outcome of integrating our lives with technology. In this work, we study the lifecycle of artificial intelligence from data gathering to deployment, providing a structured analytical assessment of the potential ethical, safety and legal concerns. The paper then presents the foundations for the first ethical artificial intelligence sustainability statement to guide future development of AI in a safe and sustainable manner.
6.
  • Brännström, Mattias, et al. (author)
  • Let it RAIN for social good
  • 2022
  • In: Proceedings of the Workshop on Artificial Intelligence Safety 2022 (AISafety 2022). - CEUR-WS.
  • Conference paper (peer-reviewed), abstract:
    • Artificial Intelligence (AI), as a highly transformative technology, takes on a special role as both an enabler of and a threat to the UN Sustainable Development Goals (SDGs). AI Ethics and emerging high-level policy efforts stand at the pivot point between these outcomes but are barred from effect due to the abstraction gap between high-level values and responsible action. In this paper, the Responsible Norms (RAIN) framework is presented, bridging this gap and thereby enabling effective high-level control of AI impact. With effective and operationalized AI Ethics, AI technologies can be directed towards global sustainable development.
7.
  • Chiou, Manolis, et al. (author)
  • Variable Autonomy for Human-Robot Teaming (VAT)
  • 2023
  • In: HRI '23. - New York, NY, USA: ACM Digital Library. - 9781450399708; pp. 932-932
  • Conference paper (peer-reviewed), abstract:
    • As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in/on/out-the-loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems with the ability to dynamically vary their level or degree of autonomy to collaborate with the human(s) efficiently and overcome various challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices that are often disconnected across different communities, as the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of Human-Robot systems, this workshop will adopt a holistic trans-disciplinary approach aiming to: a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle the challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; d) define short- and long-term research goals for the community. To achieve these objectives, this workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from other highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, toward a shared vision for VA.
8.
9.
  • De Vos, Marina, et al. (author)
  • Preface
  • 2022
  • In: Coordination, organizations, institutions, norms, and ethics for governance of multi-agent systems XIV. - Springer. - 9783031166167 - 9783031166174; pp. v-vii
  • Book chapter (other academic/artistic)
10.
  • Methnani, Leila, et al. (author)
  • Clash of the explainers : argumentation for context-appropriate explanations
  • 2024
  • In: Artificial Intelligence. ECAI 2023. - Springer. - 9783031503955 - 9783031503962; pp. 7-23
  • Conference paper (peer-reviewed), abstract:
    • Understanding when and why to apply any given eXplainable Artificial Intelligence (XAI) technique is not a straightforward task. There is no single approach that is best suited for a given context. This paper aims to address the challenge of selecting the most appropriate explainer given the context in which an explanation is required. For AI explainability to be effective, explanations and how they are presented need to be oriented towards the stakeholder receiving the explanation. If—in general—no single explanation technique surpasses the rest, then reasoning over the available methods is required in order to select one that is context-appropriate. Due to the transparency they afford, we propose employing argumentation techniques to reach an agreement over the most suitable explainers from a given set of possible explainers. In this paper, we propose a modular reasoning system consisting of a given mental model of the relevant stakeholder, a reasoner component that solves the argumentation problem generated by a multi-explainer component, and an AI model that is to be explained suitably to the stakeholder of interest. By formalizing supporting premises—and inferences—we can map stakeholder characteristics to those of explanation techniques. This allows us to reason over the techniques and prioritise the best one for the given context, while also offering transparency into the selection decision.
11.
  • Methnani, Leila, et al. (author)
  • Let Me Take Over : Variable Autonomy for Meaningful Human Control
  • 2021
  • In: Frontiers in Artificial Intelligence. - Frontiers Media S.A. - 2624-8212; vol. 4
  • Journal article (peer-reviewed), abstract:
    • As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grow. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
12.
  • Methnani, Leila, et al. (author)
  • Operationalising AI ethics : conducting socio-technical assessment
  • 2023
  • In: Human-Centered Artificial Intelligence. - Springer. - 9783031243486; pp. 304-321
  • Conference paper (peer-reviewed), abstract:
    • Several high-profile incidents that involve Artificial Intelligence (AI) have captured public attention and increased demand for regulation. Low public trust and attitudes towards AI reinforce the need for concrete policy around its development and use. However, current guidelines and standards rolled out by institutions globally are considered by many as high-level and open to interpretation, making them difficult to put into practice. This paper presents ongoing research in the field of Responsible AI and explores numerous methods of operationalising AI ethics. If AI is to be effectively regulated, it must not be considered as a technology alone—AI is embedded in the fabric of our societies and should thus be treated as a socio-technical system, requiring multi-stakeholder involvement and employment of continuous value-based methods of assessment. When putting guidelines and standards into practice, context is of critical importance. The methods and frameworks presented in this paper emphasise this need and pave the way towards operational AI ethics.
13.
  • Methnani, Leila, et al. (author)
  • Who's in charge here? a survey on trustworthy AI in variable autonomy robotic systems
  • 2024
  • In: ACM Computing Surveys. - Association for Computing Machinery (ACM). - 0360-0300, 1557-7341; vol. 56, no. 7
  • Journal article (peer-reviewed), abstract:
    • This article surveys the Variable Autonomy (VA) robotics literature that considers two contributory elements to Trustworthy AI: transparency and explainability. These elements should play a crucial role when designing and adopting robotic systems, especially in VA, where poor or untimely adjustments of the system's level of autonomy can lead to errors, control conflicts, user frustration, and ultimate disuse of the system. Despite this need, transparency and explainability are, to the best of our knowledge, mostly overlooked in the VA robotics literature or not considered explicitly. In this article, we aim to present and examine the most recent contributions to the VA literature concerning transparency and explainability. In addition, we propose a way of thinking about VA by breaking these two concepts down based on: the mission of the human-robot team; who the stakeholder is; what needs to be made transparent or explained; why they need it; and how it can be achieved. Last, we provide insights and propose ways to move VA research forward. Our goal with this article is to raise awareness and inter-community discussions among the Trustworthy AI and the VA robotics communities.
14.
  • Pedroza, Gabriel, et al. (author)
  • The IJCAI-23 joint workshop on artificial intelligence safety and safe reinforcement learning (AISafety-SafeRL2023)
  • 2023
  • In: Proceedings of the IJCAI-23 joint workshop on artificial intelligence safety and safe reinforcement learning (AISafety-SafeRL 2023) co-located with the 32nd international joint conference on artificial intelligence (IJCAI 2023). - CEUR-WS.
  • Book chapter (other academic/artistic), abstract:
    • We summarize the IJCAI-23 Joint Workshop on Artificial Intelligence Safety and Safe Reinforcement Learning (AISafety-SafeRL 2023), held at the 32nd International Joint Conference on Artificial Intelligence (IJCAI-23) on August 21-22, 2023 in Macau, China.
15.
16.
  • Sartori, Laura, et al. (author)
  • A sociotechnical perspective for the future of AI : narratives, inequalities, and human control
  • 2022
  • In: Ethics and Information Technology. - Springer. - 1388-1957, 1572-8439; vol. 24, no. 1
  • Journal article (peer-reviewed), abstract:
    • Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature of ‘magnifying glasses’ in the automation of existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. Not to be considered as panaceas, they all contribute to ensuring human control in novel practices that include requirement, design and development methodologies for a fairer AI. Second, we elaborate on the mounting attention for technological narratives as technology is recognized as a social practice within a specific institutional context. Not only do narratives reflect organizing visions for society, but they also are a tangible sign of the traditional lines of social, economic, and political inequalities. We conclude with a call for a diverse approach within the AI community and a richer knowledge about narratives as they help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and it will benefit from a socio-technical perspective.
17.
  • Theodorou, Andreas, Dr, et al. (author)
  • Good AI for good : how AI strategies of the Nordic countries address the sustainable development goals
  • 2022
  • In: Adverse impacts and collateral effects of artificial intelligence technologies 2022. - CEUR-WS; pp. 46-53
  • Conference paper (peer-reviewed), abstract:
    • Developed and used responsibly, Artificial Intelligence (AI) is a force for global sustainable development. Given this opportunity, we expect that many of the existing guidelines and recommendations for trustworthy or responsible AI will provide explicit guidance on how AI can contribute to the achievement of the United Nations' Sustainable Development Goals (SDGs). This would in particular be the case for the AI strategies of the Nordic countries, at least given their high ranking and overall political focus when it comes to the achievement of the SDGs. In this paper, we present an analysis of existing AI recommendations from 10 different countries or organisations, based on topic modelling techniques, to identify how much these strategy documents refer to the SDGs. The analysis shows no significant difference in how much these documents refer to the SDGs. Moreover, the Nordic countries are no different from the others, despite their long-term commitment to the SDGs. More importantly, references to gender equality (SDG 5) and inequality (SDG 10), as well as references to the environmental impact of AI development and use, and in particular the consequences for life on earth, are notably missing from the guidelines.
18.
  • Theodorou, Andreas, Dr, et al. (author)
  • Responsible AI at work : incorporating human values
  • 2024
  • In: Handbook of artificial intelligence at work. - Edward Elgar Publishing. - 9781800889972 - 9781800889965; pp. 32-46
  • Book chapter (peer-reviewed)
19.
  • Vinuesa, Ricardo, et al. (author)
  • A socio-technical framework for digital contact tracing
  • 2020
  • In: Results in Engineering (RINENG). - Elsevier B.V. - 2590-1230; vol. 8
  • Journal article (peer-reviewed), abstract:
    • In their efforts to tackle the COVID-19 crisis, decision makers are considering the development and use of smartphone applications for contact tracing. Even though these applications differ in technology and methods, there is an increasing concern about their implications for privacy and human rights. Here we propose a framework to evaluate their suitability in terms of impact on the users, employed technology and governance methods. We illustrate its usage with three applications, and with the European Data Protection Board (EDPB) guidelines, highlighting their limitations.
20.
  • Winfield, Alan F. T., et al. (author)
  • IEEE P7001 : A Proposed Standard on Transparency
  • 2021
  • In: Frontiers in Robotics and AI. - Frontiers Media S.A. - 2296-9144; vol. 8
  • Journal article (peer-reviewed), abstract:
    • This paper describes IEEE P7001, a new draft standard on transparency of autonomous systems. In the paper, we outline the development and structure of the draft standard. We present the rationale for transparency as a measurable, testable property. We outline five stakeholder groups: users, the general public and bystanders, safety certification agencies, incident/accident investigators and lawyers/expert witnesses, and explain the thinking behind the normative definitions of “levels” of transparency for each stakeholder group in P7001. The paper illustrates the application of P7001 through worked examples of both specification and assessment of fictional autonomous systems.
Type of publication
conference paper (8)
journal article (7)
book chapter (3)
edited collection (editorship) (1)
proceedings (editorship) (1)
Type of content
peer-reviewed (15)
other academic/artistic (5)
Author/editor
Theodorou, Andreas, ... (20)
Dignum, Virginia, Pr ... (8)
Methnani, Leila (5)
Aler Tubella, Andrea ... (4)
Nieves, Juan Carlos, ... (4)
Vinuesa, Ricardo (2)
Dignum, Frank (2)
McDermid, John (2)
Brännström, Mattias (2)
Wortham, Robert H. (2)
Chiou, Manolis (2)
Booth, Serena (2)
De Vos, Marina (2)
Hernandez-Orallo, Jo ... (2)
Pedroza, Gabriel (2)
Chen, Xin Cynthia (2)
Huang, Xiaowei (2)
Mallah, Richard (2)
Castillo-Effen, Maur ... (2)
Rossi, Francesca (1)
Lacerda, Bruno (1)
Baum, Kevin (1)
Bryson, Joanna (1)
Grobelnik, Marko (1)
Hoos, Holger (1)
Irgens, Morten (1)
Lukowicz, Paul (1)
Muller, Catelijne (1)
Shawe-Taylor, John (1)
Bogani, Ronny (1)
Arnaboldi, Luca (1)
Hastie, Helen (1)
Rothfuß, Simon (1)
Battaglini, Manuela (1)
Liu, Anqi (1)
Antoniades, Andreas (1)
Matragkas, Nikolaos (1)
Espinoza, Huascar (1)
Bossens, David (1)
Koenighofer, Bettina (1)
Tschiatschek, Sebast ... (1)
Sartori, Laura (1)
Dignum, Vanessa (1)
Winfield, Alan F. T. (1)
Dennis, Louise A. (1)
Egawa, Takashi (1)
Jacobs, Naomi (1)
Muttram, Roderick I. (1)
Olszewska, Joanna I. (1)
Rajabiyazdi, Fahimeh (1)
Higher education institution
Umeå universitet (20)
Kungliga Tekniska Högskolan (2)
Language
English (20)
Research subject (UKÄ/SCB)
Natural sciences (19)
Engineering and technology (5)
Social sciences (4)
Humanities (1)
