SwePub

Results for the search "WFRF:(Devinney Hannah)"


  • Results 1-9 of 9
1.
  • Aler Tubella, Andrea, 1990-, et al. (author)
  • ACROCPoLis: a descriptive framework for making sense of fairness
  • 2023
  • In: FAccT '23. ACM Digital Library. ISBN 9781450372527, pp. 1014-1025
  • Conference paper (peer-reviewed), abstract:
    • Fairness is central to the ethical and responsible development and use of AI systems, with a large number of frameworks and formal notions of algorithmic fairness being available. However, many of the fairness solutions proposed revolve around technical considerations and not the needs of and consequences for the most impacted communities. We therefore want to take the focus away from definitions and allow for the inclusion of societal and relational aspects to represent how the effects of AI systems impact and are experienced by individuals and social groups. In this paper, we do this by means of proposing the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects. The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit, as well as their interrelationships. This enables us to compare analogous situations, to highlight the differences in dissimilar situations, and to capture differing interpretations of the same situation by different stakeholders.
2.
  • Björklund, Henrik, et al. (author)
  • Computer, enhence: POS-tagging improvements for nonbinary pronoun use in Swedish
  • 2023
  • In: Proceedings of the Third Workshop on Language Technology for Equality, Diversity, Inclusion. The Association for Computational Linguistics. ISBN 9789544520847, pp. 54-61
  • Conference paper (peer-reviewed), abstract:
    • Part of Speech (POS) taggers for Swedish routinely fail for the third person gender-neutral pronoun hen, despite the fact that it has been a well-established part of the Swedish language since at least 2014. In addition to simply being a form of gender bias, this failure can have negative effects on other tasks relying on POS information. We demonstrate the usefulness of semi-synthetic augmented datasets in a case study, retraining a POS tagger to correctly recognize hen as a personal pronoun. We evaluate our retrained models for both tag accuracy and on a downstream task (dependency parsing) in a classical NLP pipeline. Our results show that adding such data works to correct for the disparity in performance. The accuracy rate for identifying hen as a pronoun can be brought up to acceptable levels with only minor adjustments to the tagger’s vocabulary files. Performance parity to gendered pronouns can be reached after retraining with only a few hundred examples. This increase in POS tag accuracy also results in improvements for dependency parsing sentences containing hen.
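The augmentation step described in this abstract is easy to picture in code. Below is a minimal Python sketch of that kind of semi-synthetic data: copies of tagged sentences with gendered pronouns swapped for hen. The (token, tag) sentence format and the pronoun mapping are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative mapping from gendered Swedish pronoun forms to hen-forms;
# an assumption for this sketch, not the paper's exact replacement rules.
SWAP = {"han": "hen", "hon": "hen", "honom": "hen",
        "henne": "hen", "hans": "hens", "hennes": "hens"}

def swap_token(tok):
    """Replace a gendered pronoun with its hen-form, preserving case."""
    new = SWAP.get(tok.lower())
    if new is None:
        return tok
    return new.capitalize() if tok[0].isupper() else new

def augment_with_hen(tagged_sentences):
    """Yield hen-ified copies of sentences that contain gendered pronouns."""
    for sent in tagged_sentences:
        if any(tok.lower() in SWAP for tok, _ in sent):
            yield [(swap_token(tok), tag) for tok, tag in sent]

train = [[("Hon", "PN"), ("läser", "VB"), ("boken", "NN")]]
print(list(augment_with_hen(train)))
# -> [[('Hen', 'PN'), ('läser', 'VB'), ('boken', 'NN')]]
```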
3.
  • Björklund, Henrik, et al. (author)
  • Improving Swedish part-of-speech tagging for hen
  • 2022
  • Conference paper (peer-reviewed), abstract:
    • Despite the fact that the gender-neutral pronoun hen was officially added to the Swedish language in 2014, state-of-the-art part-of-speech taggers still routinely fail to identify it as a pronoun. We retrain both efselab and spaCy models with augmented (semi-synthetic) data, where instances of gendered pronouns are replaced by hen to correct for the lack of representation in the original training data. Our results show that adding such data works to correct for the disparity in performance.
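A natural way to check the performance disparity both of these papers target is to measure tag accuracy separately per pronoun. A minimal sketch, assuming (token, gold tag, predicted tag) triples as input; the PN/NN tags are illustrative:

```python
from collections import defaultdict

def pronoun_accuracy(rows, pronouns=("hen", "han", "hon")):
    """Tag accuracy per pronoun, over (token, gold, predicted) triples."""
    correct, total = defaultdict(int), defaultdict(int)
    for token, gold, pred in rows:
        key = token.lower()
        if key in pronouns:
            total[key] += 1
            correct[key] += int(gold == pred)
    return {p: correct[p] / total[p] for p in pronouns if total[p]}

rows = [("Hen", "PN", "PN"), ("hen", "PN", "NN"), ("han", "PN", "PN")]
print(pronoun_accuracy(rows))  # {'hen': 0.5, 'han': 1.0}
```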
4.
  • Devinney, Hannah, et al. (author)
  • Crime and Relationship: Exploring Gender Bias in NLP Corpora
  • 2020
  • Conference paper (peer-reviewed), abstract:
    • Gender bias in natural language processing (NLP) tools, deriving from implicit human bias embedded in language data, is an important and complicated problem on the road to fair algorithms. We leverage topic modeling to retrieve documents associated with particular gendered categories, and discuss how exploring these documents can inform our understanding of the corpora we may use to train NLP tools. This is a starting point for challenging the systemic power structures and producing a justice-focused approach to NLP.
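The retrieval idea in this abstract can be approximated with off-the-shelf tools. A minimal sketch, assuming scikit-learn: fit a topic model, flag topics whose strongest words overlap a set of gendered seed terms, and pull the documents that load on those topics. The toy corpus and seed set are illustrative, not the paper's data or lexicons.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import numpy as np

docs = [
    "the nurse said she would check on her patient",
    "she and her sister opened a small bakery",
    "the suspect fled before he could be arrested",
    "he was charged after the robbery investigation",
]
gendered_seeds = {"she", "her"}  # illustrative feminine seed terms

vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = np.array(vec.get_feature_names_out())

# Topics whose ten strongest words overlap the seed terms
top_words = vocab[np.argsort(lda.components_, axis=1)[:, -10:]]
gendered_topics = [t for t, words in enumerate(top_words)
                   if gendered_seeds & set(words)]

# Retrieve the documents that load most heavily on those topics
doc_topics = lda.transform(X)
for t in gendered_topics:
    ranked = np.argsort(doc_topics[:, t])[::-1]
    print(f"topic {t}:", [docs[i] for i in ranked[:2]])
```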
5.
  • Devinney, Hannah, 1995-, et al. (author)
  • Developing a multilingual corpus of Wikipedia biographies
  • 2023
  • In: International Conference Recent Advances in Natural Language Processing 2023: Large Language Models for Natural Language Processing. Shoumen, Bulgaria: Incoma Ltd. ISBN 9789544520922
  • Conference paper (peer-reviewed), abstract:
    • For many languages, Wikipedia is the most accessible source of biographical information. Studying how Wikipedia describes the lives of people can provide insights into societal biases, as well as cultural differences more generally. We present a method for extracting datasets of Wikipedia biographies. The accompanying codebase is adapted to English, Swedish, Russian, Chinese, and Farsi, and is extendable to other languages. We present an exploratory analysis of biographical topics and gendered patterns in four languages using topic modelling and embedding clustering. We find similarities across languages in the types of categories present, with the distribution of biographies concentrated in the language’s core regions. Masculine terms are over-represented and spread out over a wide variety of topics. Feminine terms are less frequent and linked to more constrained topics. Non-binary terms are nearly non-represented.
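For readers who want a starting point, biography text can be pulled through the public MediaWiki API. A minimal sketch, assuming the requests library; the category name is a placeholder, and this is not the authors' released codebase. Swapping the domain (e.g. sv.wikipedia.org) changes the language.

```python
import requests

API = "https://en.wikipedia.org/w/api.php"  # swap the domain for other languages

def category_titles(category, limit=50):
    """List page titles in one Wikipedia category (single request, no paging)."""
    params = {"action": "query", "format": "json", "list": "categorymembers",
              "cmtitle": f"Category:{category}", "cmlimit": limit}
    data = requests.get(API, params=params, timeout=30).json()
    return [m["title"] for m in data["query"]["categorymembers"]]

def plain_text(title):
    """Fetch the plain-text extract of a single page."""
    params = {"action": "query", "format": "json", "prop": "extracts",
              "explaintext": 1, "titles": title}
    pages = requests.get(API, params=params, timeout=30).json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

for title in category_titles("Swedish women linguists", limit=5):  # placeholder category
    print(title, len(plain_text(title)))
```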
6.
  • Devinney, Hannah, 1995- (author)
  • Gender and representation: investigations of bias in natural language processing
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • Natural Language Processing (NLP) technologies are a part of our everyday realities. They come in forms we can easily see as ‘language technologies’ (auto-correct, translation services, search results) as well as those that fly under our radar (social media algorithms, 'suggested reading' recommendations on news sites, spam filters). NLP fuels many other tools under the Artificial Intelligence umbrella, such as algorithms approving loan applications, which can have major material effects on our lives. As large language models like ChatGPT have become popularized, we are also increasingly exposed to machine-generated texts. Machine Learning (ML) methods, which most modern NLP tools rely on, replicate patterns in their training data. Typically, these language data are generated by humans, and contain both overt and underlying patterns that we consider socially undesirable, comprising stereotypes and other reflections of human prejudice. Such patterns (often termed 'bias') are picked up and repeated, or even made more extreme, by ML systems. Thus, NLP technologies become a part of the linguistic landscapes in which we humans transmit stereotypes and act on our prejudices. They may participate in this transmission by, for example, translating nurses as women (and doctors as men) or systematically preferring to suggest promoting men over women. These technologies are tools in the construction of power asymmetries not only through the reinforcement of hegemony, but also through the distribution of material resources when they are included in decision-making processes such as screening job applications. This thesis explores gendered biases, trans and nonbinary inclusion, and queer representation within NLP through a feminist and intersectional lens. Three key areas are investigated: the ways in which “gender” is theorized and operationalized by researchers investigating gender bias in NLP; gendered associations within datasets used for training language technologies; and the representation of queer (particularly trans and nonbinary) identities in the output of both low-level NLP models and large language models (LLMs). The findings indicate that nonbinary people/genders are erased both by bias in NLP tools/datasets and by research/ers attempting to address gender biases. Men and women are also held to cisheteronormative standards (and stereotypes), which is particularly problematic when considering the intersection of gender and sexuality. Although it is possible to mitigate some of these issues in particular circumstances, such as addressing erasure by adding more examples of nonbinary language to training data, the complex nature of the socio-technical landscape which NLP technologies are a part of means that simple fixes may not always be sufficient. Additionally, it is important that ways of measuring and mitigating 'bias' remain flexible, as our understandings of social categories, stereotypes and other undesirable norms, and 'bias' itself will shift across contexts such as time and linguistic setting.
7.
  • Devinney, Hannah, et al. (author)
  • Semi-Supervised Topic Modeling for Gender Bias Discovery in English and Swedish
  • 2020
  • In: Proceedings of the Second Workshop on Gender Bias in Natural Language Processing. Association for Computational Linguistics, pp. 79-92
  • Conference paper (peer-reviewed), abstract:
    • Gender bias has been identified in many models for Natural Language Processing, stemming from implicit biases in the text corpora used to train the models. Such corpora are too large to closely analyze for biased or stereotypical content. Thus, we argue for a combination of quantitative and qualitative methods, where the quantitative part produces a view of the data of a size suitable for qualitative analysis. We investigate the usefulness of semi-supervised topic modeling for the detection and analysis of gender bias in three corpora (mainstream news articles in English and Swedish, and LGBTQ+ web content in English). We compare differences in topic models for three gender categories (masculine, feminine, and nonbinary or neutral) in each corpus. We find that in all corpora, genders are treated differently and that these differences tend to correspond to hegemonic ideas of gender.
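One way to approximate a semi-supervised topic model of the kind described here is to seed topics with gendered terms by raising their prior weight. A minimal sketch, assuming gensim, whose LdaModel accepts a per-topic, per-word eta prior; the toy documents and seed lists are illustrative, not the paper's corpora or lexicons.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
import numpy as np

docs = [
    "han åtalades för rånet mot banken",
    "hon gifte sig med sin partner i kyrkan",
    "hen skrev en avhandling om språkteknologi",
    "han dömdes till fängelse efter rättegången",
]
seeds = {0: ["han"], 1: ["hon"], 2: ["hen"]}  # illustrative seed terms per topic

tokenized = [d.lower().split() for d in docs]
dictionary = Dictionary(tokenized)
bow = [dictionary.doc2bow(t) for t in tokenized]

# Uniform weak prior, with extra mass on each seeded topic's seed terms
num_topics = 4  # three seeded topics plus one free topic
eta = np.full((num_topics, len(dictionary)), 0.01)
for topic, words in seeds.items():
    for w in words:
        if w in dictionary.token2id:
            eta[topic, dictionary.token2id[w]] = 1.0

lda = LdaModel(bow, id2word=dictionary, num_topics=num_topics,
               eta=eta, passes=50, random_state=0)
for t in range(num_topics):
    print(t, lda.print_topic(t, topn=5))  # compare what each seeded topic attracts
```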
8.
  • Devinney, Hannah, 1995-, et al. (author)
  • Theories of Gender in Natural Language Processing
  • 2022
  • In: Proceedings of the Fifth Annual ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT '22). New York, NY, USA: ACM.
  • Conference paper (peer-reviewed), abstract:
    • The rise of concern around Natural Language Processing (NLP) technologies containing and perpetuating social biases has led to a rich and rapidly growing area of research. Gender bias is one of the central biases being analyzed, but to date there is no comprehensive analysis of how “gender” is theorized in the field. We survey nearly 200 articles concerning gender bias in NLP to discover how the field conceptualizes gender both explicitly (e.g. through definitions of terms) and implicitly (e.g. through how gender is operationalized in practice). In order to get a better idea of emerging trajectories of thought, we split these articles into two sections by time. We find that the majority of the articles do not make their theorization of gender explicit, even if they clearly define “bias.” Almost none use a model of gender that is intersectional or inclusive of nonbinary genders; and many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people. There is an increase between the two time-sections in statements acknowledging that gender is a complicated reality; however, very few articles manage to put this acknowledgment into practice. In addition to analyzing these findings, we provide specific recommendations to facilitate interdisciplinary work, and to incorporate theory and methodology from Gender Studies. Our hope is that this will produce more inclusive gender bias research in NLP.
9.
  • Devinney, Hannah, et al. (author)
  • Theories of Gender in Natural Language Processing
  • 2022
  • Conference paper (peer-reviewed), abstract:
    • The rise of concern around Natural Language Processing (NLP) technologies containing and perpetuating social biases has led to a rich and rapidly growing area of research. Gender bias is one of the central biases being analyzed, but to date there is no comprehensive analysis of how “gender” is theorized in the field. We survey nearly 200 articles concerning gender bias in NLP to discover how the field conceptualizes gender both explicitly (e.g. through definitions of terms) and implicitly (e.g. through how gender is operationalized in practice). In order to get a better idea of emerging trajectories of thought, we split these articles into two sections by time. We find that the majority of the articles do not make their theorization of gender explicit, even if they clearly define “bias.” Almost none use a model of gender that is intersectional or inclusive of nonbinary genders; and many conflate sex characteristics, social gender, and linguistic gender in ways that disregard the existence and experience of trans, nonbinary, and intersex people. There is an increase between the two time-sections in statements acknowledging that gender is a complicated reality; however, very few articles manage to put this acknowledgment into practice. In addition to analyzing these findings, we provide specific recommendations to facilitate interdisciplinary work, and to incorporate theory and methodology from Gender Studies. Our hope is that this will produce more inclusive gender bias research in NLP.