Manual corpus annotation enables exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process, and most studies adopting this approach have paid insufficient attention to the methodological challenges it involves, especially concerning transparency, reliability and replicability. This article presents a procedure for annotating evaluative expressions in text that supports more transparent, reliable and replicable analyses. The method is demonstrated through a case study of APPRAISAL (Martin and White, 2005) in a small specialised corpus of CEO letters published by the British energy company BP and four of its competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis's (2014) model of trust-repair discourse, we examine how BP's CEO strategically deploys ATTITUDE and ENGAGEMENT resources in an attempt to repair stakeholders' trust after the accident.