SwePub
Search the SwePub database


Hit list for the search "WFRF:(Borger Linda 1976)"

  • Result 1-10 of 23
1.
  •  
2.
  • Borger, Linda, 1976 (author)
  • Assessing interactional skills in a paired speaking test: Raters’ interpretation of the construct
  • 2019
  • In: Apples - Journal of Applied Language Studies. - 1457-9863. ; 13:1, pp. 151-174
  • Journal article (peer-reviewed) abstract
    • The operationalization of interactional competence (IC) within the paired speaking test format allows for a range of interactional skills to be tested. However, in terms of assessment, challenges are posed with regard to the co-constructed nature of IC, making investigations into raters’ perceptions of the construct essential to inform test score interpretation. This qualitative study explores features of IC that raters attended to as they evaluated performances in a paired speaking test, part of a Swedish national test of English as a Foreign Language (EFL). Two groups of raters, 17 EFL teachers from Sweden, using national standards based on the Common European Framework of Reference for Languages (CEFR), and 14 raters from Finland and Spain, using CEFR scales, rated six audio-recorded paired performances, and provided written comments to explain their scores and account for salient features. The findings of the content analysis indicate that raters attended to three main interactional resources: topic development moves, turn-taking management, and interactive listening strategies. As part of the decision-making process, raters also considered the impact of test-takers’ interactional roles and how candidates’ performances were interrelated. In the paper, interaction strategies that were perceived as more or less successful by raters are highlighted. The findings have implications for our understanding of raters’ operationalization of IC in the context of paired speaking tests, and for the development of
  •  
3.
  • Borger, Linda, 1976 (author)
  • Bedömning av muntlig förmåga i engelska – om bedömarvariation och beslutsprocesser ur ett nationellt och europeiskt perspektiv
  • 2021
  • In: Language and Literature in Education 1: Forskarskolan FRAM – lärare forskar i de främmande språkens didaktik. - Stockholm : Stockholm University Press. - 9789176351314 - 9789176351291
  • Book chapter (peer-reviewed) abstract
    • The chapter deals with the assessment of oral English proficiency, focusing on interaction in a national test for upper secondary school. Decision-making processes and the degree of agreement between raters are examined in a study of seventeen Swedish teachers and fourteen European raters, who worked from the same oral samples but used different scales for their assessments.
  •  
4.
  •  
5.
  • Borger, Linda, 1976 (author)
  • Evaluating a high-stakes EFL speaking test: Teachers’ practices and views
  • 2019
  • In: Moderna Språk. - 2000-3560. ; 113:1, pp. 25-57
  • Journal article (peer-reviewed) abstract
    • In the present paper, teachers’ implementation practices and views of practicality regarding a paired speaking test, part of a high-stakes national test of English as a foreign language (EFL) in the Swedish upper secondary school, were investigated. In Sweden, national tests are centrally developed but internally marked by teachers at the schools where they are administered. Two-hundred and sixty-seven teachers participated in a nation-wide online survey and answered closed and open-ended questions. The responses reflect how teachers implement and perceive the national speaking test in relation to purposes and guidelines. Furthermore, challenges relating to the implementation were also reported. The results showed that there were variations in how the national speaking test was implemented at the local school level. This has clear implications for standardisation, but must be considered in relation to the decentralised school system that the test is embedded in, which requires local decisions to be made and local responsibility to be taken. In addition, many teachers perceived that they did not receive enough support from the school management, indicating that clearer routines and administrative support are needed. Statistical tests were undertaken to explore potential differences related to certain background variables. It was found that school size accounted for some of the variation in teachers’ responses, with teachers at smaller schools perceiving the practical implementation of the oral tests to be more problematic and time-consuming. The paper concludes with a discussion of the implications of the findings for the practice of high-stakes speaking assessment programs, focusing on the educational context of the current investigation.
  •  
6.
  • Borger, Linda, 1976 (author)
  • Exploring rater variability in a foreign language speaking test
  • 2015
  • In: The twelfth Annual Conference of EALTA, Copenhagen, Denmark 28th - 31st of May, 2015.
  • Conference paper (other academic/artistic) abstract
    • This exploratory study aims to provide insight into the issue of rater variability in a paired speaking test, part of a mandatory Swedish national test of English. Six authentic conversations were rated by (1) a group of Swedish teachers of English (n = 17), using national performance standards, and (2) a group of external CEFR raters (n = 14), using scales from the CEFR, the latter to enable a tentative comparison between the Swedish language syllabus and the CEFR. Quantitative and qualitative methods were employed to analyse scores and raters’ justifications of these scores in the form of concurrent written comments. As expected, findings showed some variability of ratings as well as differences in consistency. Analyses of the written comments, using NVivo 10 software, indicate a wide array of features taken into account in raters’ holistic rating decisions, however with test takers’ linguistic and pragmatic competences, and interaction strategies the most salient. Raters also seemed to heed the same features, indicating considerable agreement regarding the construct. Furthermore, a tentative comparison between written comments and scores shows that raters notice fairly similar features across proficiency levels, but in some cases evaluate them differently. The results will be discussed in relation to issues of design, testing model and participating rater groups. Finally, the results will be considered with regard to the current policy of national assessment in Sweden, where teachers are responsible for rating their own students’ tests as well as for awarding their final subject grades.
  •  
7.
  • Borger, Linda, 1976, et al. (author)
  • How representative is the Swedish PISA sample? A comparison of PISA and register data
  • 2024
  • In: Educational Assessment, Evaluation and Accountability. - 1874-8597 .- 1874-8600.
  • Journal article (peer-reviewed) abstract
    • PISA aims to serve as a "global yardstick" for educational success, as measured by student performance. For comparisons to be meaningful across countries or over time, PISA samples must be representative of the population of 15-year-old students in each country. Exclusions and non-response can undermine this representativeness and potentially bias estimates of student performance. Unfortunately, testing the representativeness of PISA samples is typically infeasible due to unknown population parameters. To address this issue, we integrate PISA 2018 data with comprehensive Swedish registry data, which includes all students in Sweden. Utilizing various achievement measures, we find that the Swedish PISA sample significantly overestimates the achievement levels in Sweden. The observed difference equates to standardized effect sizes ranging from d = .19 to .28, corresponding to approximately 25 points on the PISA scale or an additional year of schooling. The paper concludes with a plea for more rigorous quality standards and their strict enforcement.
  •  
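The effect sizes reported in the PISA study above can be illustrated numerically: Cohen's d is the mean difference divided by the pooled standard deviation, and since the PISA scale has a standard deviation of roughly 100 points, d ≈ .25 corresponds to about 25 PISA points. A minimal sketch, using hypothetical numbers rather than the study's actual data:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference over the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical illustration (not the study's figures): a sample mean of 525
# vs. a population mean of 500, both with SD 100 (the approximate PISA SD).
d = cohens_d(525, 500, 100, 100, 5000, 100000)
print(round(d, 2))  # 0.25, i.e. about 25 points on the PISA scale
```

This shows why a seemingly small standardized effect (d = .19 to .28) translates into a substantively large gap on the reported scale.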
8.
  •  
9.
  • Borger, Linda, 1976 (author)
  • Interpreting scores of speaking - a study of rater profiles and ratings
  • 2014
  • In: The 6th Junior Researcher Meeting in Applied Linguistics 12th–14th May 2014, University of Jyväskylä, Finland.
  • Conference paper (other academic/artistic) abstract
    • A challenge with the paired speaking test format is scoring reliability. In order to interpret test scores from this kind of performance assessment, it is important to explore how raters reach their decisions. The present study aims to examine the rating of English oral proficiency in a paired speaking test, part of a Swedish national test of English. The first group of raters are Swedish teachers of English (n = 17), who made individual ratings of six audio-recorded paired conversations (twelve performances) in relation to national standards. In addition, two groups of external raters from two European countries (n = 14) assessed the same conversations in relation to corresponding CEFR scales from the Manual for relating Language Examinations to the CEFR (Council of Europe, 2009), the latter with the additional aim of making a small-scale and tentative comparison between Swedish performance standards and the CEFR levels. The raters produced notes while listening, summary comments after listening, and scores. Different aspects of content in the notes and summary comments are analysed to possibly identify features of the performances salient to the raters. Furthermore, scores are analysed to examine rater profiles and issues of consistency. Finally, the relationship between comments and scores is focused upon. Initial analyses of scores show that the rank ordering of performances as well as the degree of variability of ratings are very similar between the Swedish and CEFR raters. Also, rater profiles are prominent in both groups, with clear signs of harshness and leniency. This paper will briefly discuss the design of the study, and then present findings from analyses of rater comments and scores. Moreover, some attention will be paid to the comparison between the Swedish national standards and the CEFR scales.
  •  
10.
  • Borger, Linda, 1976 (author)
  • Investigating and Validating Spoken Interactional Competence: Rater Perspectives on a Swedish National Test of English
  • 2018
  • Doctoral thesis (other academic/artistic) abstract
    • This thesis aims to explore different aspects of validity evidence from the raters’ perspective in relation to a paired speaking test, part of a high-stakes national test of English as a Foreign Language (EFL) in the Swedish upper secondary school. Three empirical studies were undertaken with the purpose of highlighting (1) the scoring process, (2) the construct underlying the test format, and (3) the setting and test administration. In Study I and II, 17 teachers of English from Sweden, using national performance standards, and 14 raters from Finland and Spain, using scales from the Common European Framework of Reference for Languages (CEFR), rated six audio-recorded paired performances, and provided written comments to explain their scores and account for salient features. Inter-rater agreement was analysed using descriptive, correlational and reliability statistics, while content analysis was used to explore raters’ written comments. In Study III, 267 upper secondary teachers of English participated in a nation-wide online survey and answered questions about their administration and scoring practices as well as their views of practicality. The responses were analysed using descriptive statistics and tests of association. Study I revealed that raters observed a wide range of students’ oral competence, which is in line with the purpose of the test. With regard to inter-rater agreement, the statistics indicated certain degrees of variability. However, in general inter-rater consistency was acceptable, albeit with clear room for improvement. A small-scale, tentative comparison between the national EFL standards and the reference levels in the CEFR was also made. In Study II, raters’ interpretation of the construct of interactional competence was explored. The results showed that raters attended to three main interactional resources: topic development moves, turn-taking management, and interactive listening strategies. 
As part of the decision-making process, raters also considered the impact of test-takers’ interactional roles and how students’ performances were interrelated, which caused some challenges for rating. Study III investigated teachers’ implementation practices and views of practicality. The results revealed variations in how the national speaking test was implemented at the local level, which has clear implications for standardisation but must be considered in relation to the decentralised school system that the national tests are embedded in. In light of this, critical aspects of the setting, administration and scoring procedures of the national EFL speaking tests were highlighted and discussed. In the integrated discussion, the different aspects of validity evidence resulting from the empirical data are analysed in relation to a socio-cognitive framework for validating language tests (O’Sullivan & Weir, 2011; Weir, 2005). It is hoped that the thesis contributes to the field of speaking assessment in two ways: firstly by showing how a theoretical framework can be used to support the validation process, and secondly by providing a concrete example of validation of a high-stakes test, highlighting positive features as well as challenges to be addressed.
  •  
