SwePub
Search the SwePub database



Search: WFRF:(Razmetaeva Yulia)

  • Result 1-10 of 12
1.
  •  
2.
  • Colonna, Liane, et al. (author)
  • WASP-HS Community Reference Meeting: Challenges and Opportunities of Regulating AI
  • 2022
  • Reports (other academic/artistic), abstract:
    • Main Findings: AI systems are increasingly being used to shift decisions made by humans over to automated systems, potentially limiting the space for democratic participation. The risk that AI erodes democracy is exacerbated where most people are excluded from the ownership and production of AI technologies that will impact them. AI learns through datasets but, very often, that data excludes key parts of the population. Where marginalized groups are considered, datasets often contain derogatory terms, or exclude explanatory contextual information, that is hard to accurately categorise in a format that AI can process. Resulting biases within AI design raise concerns as to the quality and representativeness of AI-based decisions and their impact on society. There is very little two-way communication between the developers and users of AI technologies, such that the latter function only as personal data providers. Being largely excluded from the development of AI’s role in human decision-making, everyday individuals may feel more marginalized and disinterested in building a healthy and sustainable society. Yet, AI’s capacity for seeing patterns in big data provides new ways to reach parts of the population excluded from traditional policymaking. It can serve to identify structural discrimination and include information from those otherwise ignored in important decisions. AI could enhance public participation by both providing decision-makers with better data and helping to communicate complex decisions – and their consequences – to wider parts of the population.
  •  
3.
  • Dignum, Virginia, et al. (author)
  • On the importance of AI research beyond disciplines : establishing guidelines
  • 2024
  • Reports (other academic/artistic), abstract:
    • Artificial intelligence (AI) has evolved into a prominent player in various academic disciplines, transforming research approaches and knowledge generation. This paper explores the growing influence of AI across diverse fields and advocates for meaningful interdisciplinary AI research. It introduces the concept of "agonistic-antagonistic" interdisciplinary research, emphasizing a departure from conventional bridge-building approaches. Motivated by the need to address complex societal challenges, the paper calls for novel evaluation mechanisms that prioritize societal impact over traditional academic metrics. It stresses the importance of collaboration, challenging current systems that prioritize competition and individual excellence. The paper offers guiding principles for creating collaborative and co-productive interdisciplinary AI research environments, welcoming researchers to engage in discussions and contribute to the future of interdisciplinary AI research.
  •  
4.
  • Laulhé Shaelou, Stéphanie, et al. (author)
  • Challenges to Fundamental Human Rights in the age of Artificial Intelligence Systems : shaping the digital legal order while upholding Rule of Law principles and European values
  • 2024
  • In: ERA Forum. - : Springer. - 1612-3093 .- 1863-9038.
  • Journal article (peer-reviewed), abstract:
    • Recently, the concept of the ‘European digital legal order’ seems to have gained more importance than the overarching concept of the European legal order, of which the former is arguably a modern manifestation. The European legal order traditionally entails a set of fundamental human rights, Rule of Law principles and Democratic values as enshrined in the multinational legal order. From maintaining the Rule of Law derive the sustainability of Democratic values and the freedoms under the law enshrined in fundamental human rights. To the extent that the European digital legal order is the manifestation of the European legal order in the modern digital world, the fundamental question of the nature, scope and upholding of fundamental human rights, Rule of Law principles and Democratic values remains. Without disputing the need for digital transformation and its proper regulation, this paper will turn its attention to the current status of fundamental principles in the modern setting of democratic societies. Artificial Intelligence or Artificial Intelligence Systems are technologies that have and will have a serious impact on the European legal order at large. Without dismissing the value of a human-centered regulatory approach in the field of AI, in this paper we discuss why this may be difficult as digitisation and algorithmisation deepen. This paper reviews the regulatory framework of AI and proposes potential new/renewed/modernised rights that should enhance and/or supplement the current catalogue of fundamental human rights, as contained inter alia in the EU Charter and the ECHR. This paper also argues that regulatory standards regarding AI should be clearer and stronger, and it suggests new wording for some standards. The particular new rights and/or their new wording will be suggested in the paper.
  •  
5.
  •  
6.
  • Razmetaeva, Yulia, et al. (author)
  • AI-Based Decisions and Disappearance of Law
  • 2022
  • In: Masaryk University Journal of Law and Technology. - : Masaryk University Press. - 1802-5943 .- 1802-5951. ; 16:2, pp. 241-267
  • Journal article (peer-reviewed), abstract:
    • Based on the philosophical anthropology of Paul Ricoeur, the article examines, using the example of AI-based decisions, how the concept of responsibility changes under the influence of artificial intelligence, what reverse effect this conceptual shift has on our moral experience in general, and what consequences it has for law. The problem of AI-based decisions is argued to illustrate a general trend in the transformation of the concept of responsibility, which consists in replacing personal responsibility with a system of collective insurance against risks and in the disappearance of the capacity for responsibility from the structure of our experience, which, in turn, makes justice and law impossible.
  •  
7.
  • Razmetaeva, Yulia (author)
  • Algorithms in The Courts : Is There any Room For a Rule of Law?
  • 2022
  • In: Access to Justice in Eastern Europe. - : EAST EUROPEAN LAW RESEARCH CENTER. - 2663-0575 .- 2663-0583. ; 5:4, pp. 87-100
  • Journal article (peer-reviewed), abstract:
    • The rule of law is one of the fundamental pillars, along with human rights and democracy, which are affected by digitalisation today. Digital technologies used for the victory of populism, the manipulation of opinions, attacks on the independence of judges, and the general instrumentalisation of the law contribute significantly to the onset of negative consequences for the rule of law. Particularly dangerous are the far-reaching consequences of the algorithmisation of decision-making, including judicial decisions. The theoretical line of this research is based on the axiological method, since the rule of law, democracy, and human rights are not only the foundations of legal order, but also values recognised in many societies and supported at the individual level. The study also relied on the phenomenological method in terms of assessing the experience of being influenced by digital technologies in public and private life. The practical line of research is based on the analysis of cases of the European Court of Human Rights and the Court of Justice to illustrate the changes in jurisprudence influenced by digitalisation. This article argues that the potential weakening of the rule of law could be related to the impact of certain technologies themselves, and to their impact on certain values and foundations, which is significantly aggravated. Judicial independence is affected since judges are involved in digital interactions and are influenced by technologies along personal and public lines. Those technologies often belong to the private sector but are perceived as neutral, infallible, and highly predictive of court decisions. This leads to a distortion of the essence of legal certainty and a shift of trust from the courts to certain technologies and their creators. The possibility of algorithmic decision-making raises the question of whether the results will be fairer than, or at least as fair as, those handed down by human judges. This entails two problems, the first of which is related to the task of interpreting the law and the second of which involves the need to explain decisions. Algorithms, often perceived as reliable, are not really capable of interpreting the law, and their ability to provide proper explanations for decisions or understand context and social practices is questionable. Even partial reliance on algorithms should be limited, given the growing inability to draw a line between the human and algorithmic roles in decision-making and to determine who should be responsible for the decision and to what extent.
  •  
8.
  • Razmetaeva, Yulia (author)
  • Artificial intelligence and the end of justice
  • 2024
  • In: BioLaw Journal - Rivista di BioDiritto. - : Università di Trento. - 2284-4503. ; :1, pp. 345-365
  • Journal article (peer-reviewed), abstract:
    • Justice may be nearing its end with the advent of artificial intelligence. The ubiquitous penetration of AI, reinforced by its gaining legitimacy in non-obvious ways, is leading to a shift in the way humans perceive and apply the principles of justice. AI is incapable of truly understanding and interpreting the law, properly justifying decisions, or balancing rights and interests, which escapes public attention as people are excessively focused on its perceived perfection. Difficult to control, AI entails a significant dependency of public institutions on private actors. Without undermining artificial intelligence as such, the article calls for seriously rethinking how far we are ready to go along this path.
  •  
9.
  • Razmetaeva, Yulia (author)
  • Opinions and Algorithms : Trust, Neutrality and Legitimacy
  • 2022
  • In: Filosofiya prava i zahalʹna teoriya prava [Philosophy of law and general theory of law]. - : Yaroslav Mudryi National Law University. - 2227-7153 .- 2707-7039. ; 1, pp. 80-94
  • Journal article (peer-reviewed), abstract:
    • The article is devoted to opinions and algorithms in the digital age, with a focus on how the manipulation of the former while using the latter affects trust and legitimacy. In addition, some attention is paid to the issue of neutrality, both in relation to unbiased opinions and in relation to unbiased technologies. The article raises questions about whether we can be self-determining and self-governing agents, especially in terms of how we make decisions and what opinions we trust, if we are skillfully led to this by algorithms or those behind them. Considering that not only corporations but also governments today use technologies to influence our preferences and opinions, issues of autonomy and personal interests are touched upon, as well as the problem of nudging towards certain behaviors that are defined as the best for people, including in a paternalistic sense. The article argues that the merging of everyday life with digital spaces and algorithmization forms our experience as a fundamentally new one and does not contribute to the ability to separate imposed interests from those that are really our own. The questions of how power and legitimacy are redistributed in a digital society dependent on algorithms are discussed in this study. It has been suggested that the impact on our preferences and the management of them, when someone tries to sell us certain opinions, may be more dangerous than selling us goods and services, since it destroys institutional and interpersonal trust and contributes to the erosion of public institutions. The study shows how some technologies, primarily algorithmic ones, which are not neutral either in their essence or in the way they are used by their creators and owners, contribute to growing addiction and impoverish human interaction and the ability to form meanings.
  •  
10.
  • Razmetaeva, Yulia (author)
  • Sacralization of AI
  • 2023
  • Other publication (pop. science, debate, etc.)
  •  