SwePub
Record identifier: swepub:oai:DiVA.org:ltu-92146

Explainable Text Classification Model for COVID-19 Fake News Detection

Ahmed, Mumtahina (author)
Department of Computer Science and Engineering, Port City International University, Chittagong, Bangladesh
Hossain, Mohammad Shahadat (author)
Department of Computer Science and Engineering, University of Chittagong, Chittagong, Bangladesh
Islam, Raihan Ul, 1981- (author)
Luleå tekniska universitet, Computer Science
Andersson, Karl, 1970- (author)
Luleå tekniska universitet, Computer Science
Innovative Information Science & Technology Research Group, 2022
English.
In: Journal of Internet Services and Information Security (JISIS). Innovative Information Science & Technology Research Group. ISSN 2182-2069, ISSN 2182-2077. Vol. 12, no. 2, pp. 51-69.
  • Journal article (peer-reviewed)
Abstract
  • Artificial intelligence has achieved notable advances across many applications, and the field has recently turned to developing novel methods for explaining machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition, but because they are black-box models they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection is a challenging research problem, as misinformation endangers the lives of many online users. Transparency and explainability in COVID-19 fake news classification are therefore necessary to build trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. Because LIME behaves similarly to the original model while explaining its predictions, the integrated model becomes comprehensible. The explainability of the model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and compare their performance. Since the proposed model takes an integrated strategy, we additionally analyze and compare its computational overhead against the other methods.
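
The following is a minimal, illustrative sketch of the pipeline the abstract describes: a BiLSTM text classifier whose predictions are explained with LIME, with Kendall's tau used to compare explanation rankings. It is not the authors' implementation; the toy headlines, layer sizes, training settings, and the predict_proba wrapper are assumptions made for demonstration only (using TensorFlow/Keras, the lime package, and SciPy).

# Illustrative only: toy data standing in for COVID-19 news items (0 = real, 1 = fake).
import numpy as np
from scipy.stats import kendalltau
from lime.lime_text import LimeTextExplainer
from tensorflow.keras import layers, models

texts = [
    "health authority confirms new vaccine trial results",
    "drinking hot water cures covid in one day",
    "ministry reports daily case numbers for the region",
    "phone towers spread the coronavirus overnight",
]
labels = np.array([0, 1, 0, 1])

# Map raw strings to fixed-length integer sequences for the network.
MAX_LEN = 20
vectorizer = layers.TextVectorization(max_tokens=5000, output_sequence_length=MAX_LEN)
vectorizer.adapt(texts)

# Bidirectional LSTM classifier; sizes are illustrative, not the paper's settings.
model = models.Sequential([
    layers.Embedding(input_dim=5000, output_dim=32),
    layers.Bidirectional(layers.LSTM(16)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(vectorizer(np.array(texts)), labels, epochs=5, verbose=0)

def predict_proba(batch_of_texts):
    # LIME passes perturbed raw strings and expects class probabilities back.
    return model.predict(vectorizer(np.array(batch_of_texts)), verbose=0)

# Explain one prediction: LIME fits a local surrogate over word-presence perturbations.
explainer = LimeTextExplainer(class_names=["real", "fake"])
sample = "hot water cures covid overnight"
explanation = explainer.explain_instance(sample, predict_proba, num_features=5, num_samples=500)
print(explanation.as_list())  # (word, weight) pairs for the "fake" class

# Explanation stability: correlate the word weights from two independent LIME runs
# on the same sample with Kendall's tau (tau = 1.0 means identical ordering).
run_a = dict(explanation.as_list())
run_b = dict(explainer.explain_instance(sample, predict_proba, num_features=5, num_samples=500).as_list())
common = sorted(set(run_a) & set(run_b))
tau, _ = kendalltau([run_a[w] for w in common], [run_b[w] for w in common])
print(f"Kendall's tau between explanation runs: {tau:.2f}")

The predict_proba wrapper bridges LIME's raw-string perturbations and the Keras model's integer-sequence inputs; correlating two independent LIME runs with Kendall's tau is one simple way to read the coefficient as a measure of explanation stability, and is only a stand-in for the evaluation protocol reported in the paper.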

Subject headings

SAMHÄLLSVETENSKAP  -- Ekonomi och näringsliv -- Företagsekonomi (hsv//swe)
SOCIAL SCIENCES  -- Economics and Business -- Business Administration (hsv//eng)
NATURVETENSKAP  -- Data- och informationsvetenskap -- Datavetenskap (hsv//swe)
NATURAL SCIENCES  -- Computer and Information Sciences -- Computer Sciences (hsv//eng)

Keyword

fake news
COVID-19
Explainable AI
LIME
BiLSTM
Pervasive Mobile Computing
Distributed Computer Systems

Publication and Content Type

ref (peer-reviewed)
art (journal article)
