SwePub
Search the SwePub database

  Advanced search

Result list for search "WFRF:(Greenstein Stanley 1970 ) "


  • Results 1-9 of 9
1.
2.
3.
4.
  • Greenstein, Stanley, 1970- (author)
  • Elevating Legal Informatics in the Digital Age
  • 2021
  • In: Digital human sciences. - Stockholm : Stockholm University Press. - 9789176351475 - 9789176351451 - 9789176351444 ; pp. 155-180
  • Book chapter (peer-reviewed). Abstract:
    • The widespread use of digital technologies within society incorporating elements of artificial intelligence (AI) is increasing at a phenomenal rate. This technology promises a multitude of advantages for the development of society. However, it also entails risks. A characteristic of AI technology is that, in addition to using knowledge from computer science, it is increasingly being combined with insights from within the cognitive sciences. This deeper insight into human behavior combined with technology increases the ability of those who control the technology to not only predict but also manipulate human behavior. A function of the law is to protect individuals and society from risks and vulnerabilities, including those associated with technology. The more complex the technologies applied in society, the more difficult it may be to identify the potential harms associated with the technologies. Consequently, it is worthwhile discussing the extent to which the law protects society from the risks associated with technology. In applying the law, the dominant method is the “traditional legal science method.” It incorporates a dogmatic approach and can be described as the default legal problem-solving mechanism utilized by legal students and practitioners. A question that arises is whether it is equipped to meet the new demands placed on law in an increasingly technocratic society. Attempting to frame the modern risks to society using a traditional legal science method alone is bound to provide an unsatisfactory result in that it fails to provide a complete picture of the problem. Simply put, modern societal problems that arise from the use of new digital technologies are multidimensional in nature and require a multidisciplinary solution. In other words, applying a restrictive legal approach may result in solutions that are logical from a purely legal perspective but that are out of step with reality, potentially resulting in unjust solutions. 
This chapter examines the increased digitalization of society from the legal perspective and also elevates the application of the legal informatics approach as a means of better aligning research within legal science with other disciplines.
5.
6.
  • Greenstein, Stanley, 1970- (author)
  • Our Humanity Exposed : Predictive Modelling in a Legal Context
  • 2017
  • Doctoral thesis (other academic/artistic). Abstract:
    • This thesis examines predictive modelling from the legal perspective. Predictive modelling is a technology based on applied statistics, mathematics, machine learning and artificial intelligence that uses algorithms to analyse big data collections and identify patterns that are invisible to human beings. The accumulated knowledge is incorporated into computer models, which are then used to identify and predict human activity in new circumstances, allowing for the manipulation of human behaviour. Predictive models use big data to represent people. Big data is a term used to describe the large amounts of data produced in the digital environment. It is growing rapidly, due mainly to the fact that individuals are spending an increasing portion of their lives within the on-line environment, spurred by the internet and social media. As individuals make use of the on-line environment, they part with information about themselves. This information may concern their actions but may also reveal their personality traits. Predictive modelling is a powerful tool, which private companies are increasingly using to identify business risks and opportunities. These models are incorporated into on-line commercial decision-making systems, determining, among other things, the music people listen to, the news feeds they receive, the content people see and whether they will be granted credit. This results in a number of potential harms to the individual, especially in relation to personal autonomy. This thesis examines the harms resulting from predictive modelling, some of which are recognized by traditional law. Using the European legal context as a point of departure, this study ascertains to what extent legal regimes address the use of predictive models and the threats to personal autonomy. In particular, it analyses Article 8 of the European Convention on Human Rights (ECHR) and the forthcoming General Data Protection Regulation (GDPR) adopted by the European Union (EU). 
Considering the shortcomings of traditional legal instruments, a strategy entitled ‘empowerment’ is suggested. It comprises components of a legal and technical nature, aimed at levelling the playing field between companies and individuals in the commercial setting. Is there a way to strengthen humanity as predictive modelling continues to develop?
7.
8.
  • Greenstein, Stanley, 1970- (author)
  • Preserving the rule of law in the era of artificial intelligence (AI)
  • 2022
  • In: Artificial Intelligence and Law. - Springer Science and Business Media LLC. - ISSN 0924-8463, 1572-8382 ; 30:3, pp. 291-323
  • Journal article (peer-reviewed). Abstract:
    • The study of law and information technology comes with an inherent contradiction: while technology develops rapidly and embraces notions such as internationalization and globalization, traditional law, for the most part, is slow to react to technological developments and remains predominantly confined to national borders. The notion of the rule of law, however, defies the phenomenon of law being bound to national borders and enjoys global recognition. Yet a serious threat to the rule of law is looming in the form of an assault by technological developments within artificial intelligence (AI). As large strides are made in the academic discipline of AI, this technology is starting to make its way into digital decision-making systems and is in effect replacing human decision-makers. A prime example of this development is the use of AI to assist judges in making judicial decisions. However, in many circumstances this technology is a ‘black box’, due mainly to its complexity but also because it is protected by law. This lack of transparency, and the diminished ability to understand the operation of systems increasingly used by the structures of governance, challenges traditional notions underpinning the rule of law, especially concepts closely associated with it such as transparency, fairness and explainability. This article examines the technology of AI in relation to the rule of law, highlighting the rule of law as a mechanism for human flourishing. It investigates the extent to which the rule of law is being diminished as AI becomes entrenched within society and questions the extent to which it can survive in a technocratic society. 
9.
  • Mochaourab, Rami, et al. (authors)
  • Demonstrator on Counterfactual Explanations for Differentially Private Support Vector Machines
  • 2023
  • In: Lecture Notes in Computer Science. - Cham : Springer Science and Business Media Deutschland GmbH. - 9783031264214 - 9783031264221 ; pp. 662-666
  • Conference paper (peer-reviewed). Abstract:
    • We demonstrate the construction of robust counterfactual explanations for support vector machines (SVM), where the privacy mechanism that publicly releases the classifier guarantees differential privacy. Privacy preservation is essential when dealing with sensitive data, such as in applications within the health domain. In addition, providing explanations for machine learning predictions is an important requirement within so-called high-risk applications, as referred to in the EU AI Act. Thus, the innovative aspect of this work is the study of the interaction between three desired properties: accuracy, privacy, and explainability. The SVM classification accuracy is affected by the privacy mechanism through the perturbations it introduces in the classifier weights; consequently, a trade-off between accuracy and privacy must be considered. In addition, counterfactual explanations, which quantify the smallest changes to selected data instances needed to change their classification, may lose credibility when the model carries data privacy guarantees. Hence, robustness is needed for counterfactual explanations in order to create confidence in their credibility. Our demonstrator provides an interactive environment to show the interplay between the considered aspects of accuracy, privacy, and explainability. 
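For a linear SVM, the counterfactual the abstract describes (the smallest change that moves an instance across the released decision boundary) has a closed form: project the point onto the separating hyperplane and step slightly past it. The sketch below is illustrative only and is not the paper's method: the Gaussian noise added to the weights is an arbitrary stand-in for a properly calibrated differential-privacy mechanism, and the paper's robustness guarantees are omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Toy data standing in for sensitive records (hypothetical example).
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = LinearSVC(C=1.0).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# Output perturbation: release noisy weights instead of the true ones.
# NOTE: noise_scale is a placeholder, not a calibrated privacy budget.
rng = np.random.default_rng(0)
noise_scale = 0.1
w_priv = w + rng.normal(0.0, noise_scale, size=w.shape)
b_priv = b + rng.normal(0.0, noise_scale)

def counterfactual(x, w, b, margin=1e-6):
    """Smallest L2 change moving x just across the hyperplane w.x + b = 0."""
    shift = (x @ w + b) / (w @ w)          # signed distance scaled by ||w||^2
    return x - (1.0 + margin) * shift * w  # project onto plane, step past it

x = X[0]
x_cf = counterfactual(x, w_priv, b_priv)
# The counterfactual lands on the other side of the *released* boundary,
# so its predicted class differs from that of the original instance.
assert np.sign(x_cf @ w_priv + b_priv) != np.sign(x @ w_priv + b_priv)
```

Because the explanation is computed against the perturbed weights, the same instance can yield a different counterfactual under each noise draw, which is exactly the credibility problem the paper's robustness requirement addresses.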