Fast Lappalainen, Katarina, Jur. dr., 1975-, Stockholms universitet, Institutet för rättsinformatik (IRI)
(author)
Protecting Children from Maltreatment with the Help of Artificial Intelligence : A Promise or a Threat to Children’s Rights?
- Article/chapter, English, 2022
Publisher, year of publication, extent ...
-
Uppsala : Iustus förlag, 2022
-
Electronic (RDA carrier)
Identifiers
-
LIBRIS ID: oai:DiVA.org:su-204140
-
URI: https://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-204140
Supplementary language information
-
Language: English
-
Abstract in: English
Included in subdatabase
Classification
-
Subject category: ref (swepub-contenttype)
-
Subject category: kap (swepub-publicationtype)
Notes
-
Predictive tools for child protection based on AI have been developed, with varying success, in different parts of the world. Examples include the Vulnerable Children Predictive Risk Model of New Zealand from 2012, the Allegheny Family Screening Tool used by Allegheny County in Pennsylvania in the U.S. since 2014, and the Early Help Profiling System of Hackney County Council, London, U.K. In Scandinavia, the Gladsaxe-model from Copenhagen, Denmark, appears to have been the first in the region, as it was ready in 2018. In Sweden, the municipality of Norrtälje launched an AI tool in 2020 to analyse cases based on preliminary warning referrals, to help detect future cases of child maltreatment.

It could be argued that such tools will, in general, help prevent maltreatment of children and enable social services to become more effective in their outreach work, and thus to provide support to children at high risk at a lower cost. The struggle to provide more effective child welfare, together with the reality of substantial funding cuts common to authorities in many European countries, increases the interest in such systems. Against this promise stands the fact that the use of such tools for child protection comes with multiple risks from a children's rights perspective. This is certainly the case regarding the use of predictive risk modelling (PRM) in child welfare.

The purpose of this paper is to give a preliminary overview and analysis of the design and use of AI tools to identify children at high risk of maltreatment, in relation to relevant children's rights. Are such child protection tools aligned with children's rights as laid down in the UN Convention on the Rights of the Child (UNCRC), the European Convention on Human Rights and the EU Charter of Fundamental Rights? And to what extent?
Subject headings and genre designations
-
SAMHÄLLSVETENSKAP Juridik Juridik och samhälle hsv//swe
-
SOCIAL SCIENCES Law Law and Society hsv//eng
-
AI-act
-
Allegheny Family Screening Tool
-
artificial intelligence
-
bias
-
black box problem
-
child maltreatment
-
child protection
-
children's rights
-
child welfare
-
discrimination
-
Early Help Profiling System
-
early intervention
-
Gladsaxe-model
-
Natural Language Processing
-
Norrtälje-model
-
poverty profiling
-
predictive risk modelling
-
positive obligations
-
prohibition of torture or other inhuman or degrading treatment
-
referral bias
-
right to respect for family life
-
right to life
-
scoring
-
status quo bias
-
UN Convention on the Rights of the Child
-
Vulnerable Children Predictive Risk Model
-
orosanmälan (report of concern)
-
rättsinformatik
-
Law and Information Technology
Added entries (persons, institutions, conferences, titles ...)
-
Stockholms universitet, Institutet för rättsinformatik (IRI)
Related titles
-
In: Law, AI and Digitalisation. Uppsala : Iustus förlag, pp. 431-466. ISBN 9789177371670