SwePub
Search the SwePub database


Result list for the search "WFRF:(Campano Erik)"

Search: WFRF:(Campano Erik)

  • Results 1-5 of 5
1.
  • Campano, Erik, et al. (author)
  • An ontology of gradualist machine ethics
  • 2023
  • In: 2023 Asia Conference on Cognitive Engineering and Intelligent Interaction (CEII). - : IEEE. - 9798350306965 - 9798350306972 ; pp. 88-95
  • Conference paper (peer-reviewed), abstract:
    • Ethical gradualism is the idea that whether an entity possesses morality is not a yes-or-no question, but rather has answers on a gradual scale. Our paper contributes to the field of machine ethics by replacing the often used, and variously construed, concept of computer "morality" with the more specific concept of "moral relevance". This we define as "the characteristic of having some connection to the moral domain". Our definition requires that an entity's perceived moral relevance can be obtained as an aggregate of multi-axial, continuous-variable moral characteristics such as, but not limited to, patiency, responsibility, and autonomy. These characteristics can furthermore be broken down into concrete sub- (and sub-sub-, etc.) characteristics which are easier to measure in the real world. This gradualist model of perceived moral relevance can then be practically implemented through translation into Web Ontology Language. We depict moral relevance both graphically as a class hierarchy, and in computer code. Our implementation allows computers to recognize perceived moral relevance in other computers. This provides a basic architecture by which computers can learn perceived ethical behavior only by acting with one another. The implementation also makes possible a new kind of experimental moral psychology, in which researchers can compare gradual perceived moral relevance directly between humans and computers.
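The abstract above describes perceived moral relevance as an aggregate of continuous, multi-axial characteristics (e.g. patiency, responsibility, autonomy), each decomposable into measurable sub-characteristics. The minimal Python sketch below illustrates one way such a gradualist hierarchy could be aggregated; the class names, sub-characteristics, values, and the simple averaging scheme are illustrative assumptions, not the authors' actual ontology or their Web Ontology Language implementation.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Characteristic:
    """A node in a gradualist hierarchy of moral characteristics.

    Leaf nodes carry a measured value in [0, 1]; inner nodes derive their
    score by averaging their sub-characteristics. (Hypothetical aggregation
    scheme for illustration only.)
    """
    name: str
    value: float = 0.0                                # used only by leaf nodes
    subs: List["Characteristic"] = field(default_factory=list)

    def score(self) -> float:
        if not self.subs:
            return self.value
        return sum(c.score() for c in self.subs) / len(self.subs)


# Illustrative hierarchy: top-level characteristics named in the abstract,
# with made-up sub-characteristics and values.
moral_relevance = Characteristic("moral relevance", subs=[
    Characteristic("patiency", subs=[
        Characteristic("can be harmed", 0.7),
        Characteristic("has interests", 0.4),
    ]),
    Characteristic("responsibility", subs=[
        Characteristic("acts on reasons", 0.5),
    ]),
    Characteristic("autonomy", subs=[
        Characteristic("self-directed behaviour", 0.6),
        Characteristic("independence from operator", 0.3),
    ]),
])

print(f"perceived moral relevance: {moral_relevance.score():.2f}")
```

Because every axis is a continuous variable rather than a yes-or-no property, the same structure can score humans and computers on a common scale, which is the kind of direct comparison the abstract mentions.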
2.
  • Campano, Erik (author)
  • Pierre Lévy's kansei philosophy as understood through human-computer interaction theories
  • 2022
  • In: 2022 10th International Conference on Affective Computing and Intelligent Interaction (ACII). - : Institute of Electrical and Electronics Engineers (IEEE). - 9781665459082
  • Conference paper (peer-reviewed), abstract:
    • French industrial designer Pierre Lévy has proposed a way to understand the philosophy behind kansei engineering. His account is perhaps the most detailed explanation of kansei philosophy in a language other than Japanese. Lévy's proposal draws on the ideas of twentieth-century Kyoto School founder Kitarou Nishida, particularly Nishida's interest in phenomenology, and his concepts of action-intuition, pure experience, and basho. Five particular elements of Lévy's explanation can be compared to fundamental concepts in theories from the discipline of human-computer interaction. These theories include, but are not limited to, Paul Dourish's embodied interaction, Pierre Rabardel's instrumental genesis, and Susanne Bødker's human-artifact model. Kansei philosophy is thereby characterizable with an entirely new vocabulary and analytical framework, arising from the human-computer interaction literature. This new framework gives scholars in both Japanese and non-Japanese sociocultural settings a set of novel conceptual tools to understand kansei philosophy.
3.
  • Soma, Rebekka, et al. (author)
  • Strengthening human autonomy in the era of autonomous technology
  • 2022
  • In: Scandinavian Journal of Information Systems. - : IRIS Association. - 0905-0167 .- 1901-0990 ; 34:2, pp. 163-198
  • Journal article (peer-reviewed), abstract:
    • ‘Autonomous technologies’ refers to systems that make decisions without explicit human control or interaction. This conceptual paper explores the notion of autonomy by first exploring human autonomy, and then using this understanding to analyze how autonomous technology could or should be modelled. First, we discuss what human autonomy means. We conclude that it is the overall space for action—rather than the degree of control—and the actual choices, or number of choices, that constitutes human autonomy. Based on this, our second discussion leads us to suggest the term datanomous to denote technology that builds on, and is restricted by, its own data when operating autonomously. Our conceptual exploration brings forth a more precise definition of human autonomy and datanomous systems. Finally, we conclude this exploration by suggesting that human autonomy can be strengthened by datanomous technologies, but only if they support the human space for action. It is the purpose of human activity that determines if technology strengthens or weakens human autonomy.
4.
  • Zicari, Roberto V., et al. (author)
  • Co-design of a trustworthy AI system in healthcare : deep learning based skin lesion classifier
  • 2021
  • In: Frontiers in Human Dynamics. - : Frontiers Media S.A. - 2673-2726 ; 3
  • Journal article (peer-reviewed), abstract:
    • This paper documents how an ethically aligned co-design methodology ensures trustworthiness in the early design phase of an artificial intelligence (AI) system component for healthcare. The system explains decisions made by deep learning networks analyzing images of skin lesions. The co-design of trustworthy AI developed here used a holistic approach rather than a static ethical checklist and required a multidisciplinary team of experts working with the AI designers and their managers. Ethical, legal, and technical issues potentially arising from the future use of the AI system were investigated. This paper is a first report on co-designing in the early design phase. Our results can also serve as guidance for other, similar early-phase AI tool developments.
5.
  • Zicari, Roberto V., et al. (author)
  • On assessing trustworthy AI in healthcare : Machine learning as a supportive tool to recognize cardiac arrest in emergency calls
  • 2021
  • In: Frontiers in Human Dynamics. - : Frontiers Media S.A. - 2673-2726 ; 3
  • Journal article (peer-reviewed), abstract:
    • Artificial Intelligence (AI) has the potential to greatly improve the delivery of healthcare and other services that advance population health and wellbeing. However, the use of AI in healthcare also brings potential risks that may cause unintended harm. To guide future developments in AI, the High-Level Expert Group on AI set up by the European Commission (EC), recently published ethics guidelines for what it terms "trustworthy" AI. These guidelines are aimed at a variety of stakeholders, especially guiding practitioners toward more ethical and more robust applications of AI. In line with efforts of the EC, AI ethics scholarship focuses increasingly on converting abstract principles into actionable recommendations. However, the interpretation, relevance, and implementation of trustworthy AI depend on the domain and the context in which the AI system is used. The main contribution of this paper is to demonstrate how to use the general AI HLEG trustworthy AI guidelines in practice in the healthcare domain. To this end, we present a best practice of assessing the use of machine learning as a supportive tool to recognize cardiac arrest in emergency calls. The AI system under assessment is currently in use in the city of Copenhagen in Denmark. The assessment is accomplished by an independent team composed of philosophers, policy makers, social scientists, technical, legal, and medical experts. By leveraging an interdisciplinary team, we aim to expose the complex trade-offs and the necessity for such thorough human review when tackling socio-technical applications of AI in healthcare. For the assessment, we use a process for assessing trustworthy AI, called Z-Inspection®, to identify specific challenges and potential ethical trade-offs when we consider AI in practice.