SwePub
Search the SwePub database

Result list for the search "L773:1388 1957 OR L773:1572 8439 srt2:(2010-2014)"

  • Results 1-4 of 4
1.
  • de Vries, Katja (author)
  • Identity, profiling algorithms and a world of ambient intelligence
  • 2010
  • In: Ethics and Information Technology. Springer Science and Business Media LLC. ISSN 1388-1957, E-ISSN 1572-8439; 12:1, pp. 71-85
  • Journal article (peer-reviewed). Abstract:
    • The tendency towards an increasing integration of the informational web into our daily physical world (in particular in so-called Ambient Intelligent technologies, which combine ideas derived from the fields of Ubiquitous Computing, Intelligent User Interfaces and Ubiquitous Communication) is likely to make the development of successful profiling and personalization algorithms, like the ones currently used by internet companies such as Amazon, even more important than it is today. I argue that the way in which we experience ourselves necessarily goes through a moment of technical mediation. Because of this, algorithmic profiling that thrives on continuous reconfiguration of identification should not be understood as a supplementary process which maps a pre-established identity that exists independently from the profiling practice. In order to clarify how the experience of one’s identity can become affected by such machine-profiling, a theoretical exploration of identity is made (including Agamben’s understanding of an apparatus, Ricoeur’s distinction between idem- and ipse-identity, and Stiegler’s notion of a conjunctive–disjunctive relationship towards retentional apparatuses). Although it is clear that no specific predictions about the impact of Ambient Intelligent technologies can be made without taking more particulars into account, the theoretical concepts are used to describe three general scenarios for the way in which the experience of identity might become affected. To conclude, I argue that the experience of one’s identity may affect whether cases of unwarranted discrimination resulting from ubiquitous differentiations and identifications within an Ambient Intelligent environment will become a matter of societal concern.
2.
  • Dodig Crnkovic, Gordana, 1955, et al. (author)
  • Robots: ethical by design
  • 2012
  • In: Ethics and Information Technology. Springer Science and Business Media LLC. ISSN 1388-1957, E-ISSN 1572-8439; 14:1, pp. 61-71
  • Journal article (peer-reviewed). Abstract:
    • Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.
3.
  • Fröding, Barbro, et al. (author)
  • Why virtual friendship is no genuine friendship
  • 2012
  • In: Ethics and Information Technology. Springer Science and Business Media LLC. ISSN 1388-1957, E-ISSN 1572-8439; 14:3, pp. 201-207
  • Journal article (peer-reviewed). Abstract:
    • Based on a modern reading of Aristotle's theory of friendship, we argue that virtual friendship does not qualify as genuine friendship. By 'virtual friendship' we mean the type of friendship that exists on the internet and is seldom or never combined with real-life interaction. A 'traditional friendship' is, in contrast, the type of friendship that involves substantial real-life interaction, and we claim that only this type can merit the label 'genuine friendship' and thus qualify as morally valuable. The upshot of our discussion is that virtual friendship is what Aristotle might have described as a lower and less valuable form of social exchange.
4.
  • Hellström, Thomas, 1956- (author)
  • On the moral responsibility of military robots
  • 2013
  • In: Ethics and Information Technology. Dordrecht: Springer. ISSN 1388-1957, E-ISSN 1572-8439; 15:2, pp. 99-107
  • Journal article (peer-reviewed). Abstract:
    • This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important, already today, to investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated already now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.