SwePub
Record ID: swepub:oai:DiVA.org:mdh-46634

Non-contact-based driver's cognitive load classification using physiological and vehicular parameters

Rahman, Hamidur (author)
Mälardalens högskola, Inbyggda system
Ahmed, Mobyen Uddin, Dr, 1976- (author)
Mälardalens högskola, Inbyggda system
Barua, Shaibal (author)
Mälardalens högskola, Inbyggda system
Begum, Shahina, 1977- (author)
Mälardalens högskola, Inbyggda system
Elsevier, 2020
English.
In: Biomedical Signal Processing and Control. - Elsevier. - ISSN 1746-8094, E-ISSN 1746-8108. Vol. 55
  • Journal article (peer-reviewed)
Abstract
Classification of cognitive load for vehicular drivers is a complex task due to the underlying challenges of the dynamic driving environment. Many previous works have shown that physiological sensor signals or vehicular data can be a reliable source for quantifying cognitive load. However, in driving situations, one of the biggest challenges is to use a sensor source that provides accurate information without interrupting the driving task. In this paper, instead of traditional wire-based sensors, non-contact camera and vehicle data are used, which require no physical contact with the driver and do not interrupt driving. Four machine learning algorithms, logistic regression (LR), support vector machine (SVM), linear discriminant analysis (LDA) and neural networks (NN), are investigated for classifying cognitive load using data collected in a driving simulator study. Physiological parameters are extracted from facial video images, and vehicular parameters are collected from the controller area network (CAN). Data collection was performed in close collaboration with industrial partners in two separate studies: study-1 was designed with a 1-back task, and study-2 with both 1-back and 2-back tasks. The goal of the experiment is to investigate how accurately the machine learning algorithms can classify drivers' cognitive load from the extracted features in complex, dynamic driving environments. According to the results, for the physiological parameters extracted from the facial videos, the LR model outperforms the other three classification methods: in study-1 the LR classifier achieves an average accuracy of 94%, and in study-2 an average accuracy of 82%. In addition, the classification accuracy for the camera-based physiological parameters was compared with reference wire-based sensor signals. The accuracies of the sensor and the camera are very similar; however, the camera data yield slightly better accuracy because they contain fewer artefacts than the sensor data.
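As a rough, hypothetical illustration of the comparison the abstract describes (not the authors' actual pipeline), the scikit-learn sketch below trains the four named classifiers (LR, SVM, LDA, NN) on a feature matrix of extracted parameters and reports cross-validated accuracy per model. The feature matrix X, the labels y, and all hyperparameters are placeholder assumptions.

# Minimal sketch: compare LR, SVM, LDA and a small NN on cognitive-load
# features. X and y are random placeholders standing in for the paper's
# physiological (facial-video) and vehicular (CAN) parameters; none of
# the settings below come from the paper itself.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # 200 windows x 12 assumed features
y = rng.integers(0, 2, size=200)      # 0 = low load, 1 = high load (e.g. n-back)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scale features first
    scores = cross_val_score(pipe, X, y, cv=5)     # 5-fold CV accuracy
    print(f"{name}: mean accuracy = {scores.mean():.2f}")

With the random placeholder data the scores hover around chance; substituting a real feature matrix and labels reproduces the kind of LR-versus-SVM/LDA/NN comparison the study reports.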

Subject headings

ENGINEERING AND TECHNOLOGY -- Electrical Engineering, Electronic Engineering, Information Engineering -- Computer Systems (hsv//eng)

Keyword

Non-contact
Physiological parameters
Vehicular parameters
Cognitive load
Classification
Logistic regression
Support vector machine
Decision tree

Publication and Content Type

ref (peer-reviewed)
art (journal article)


Find more in SwePub

By the author/editor
Rahman, Hamidur
Ahmed, Mobyen Uddin
Barua, Shaibal
Begum, Shahina
About the subject
ENGINEERING AND TECHNOLOGY
and Electrical Engineering, Electronic Engineering, Information Engineering
and Computer Systems
Articles in the publication
Biomedical Signal Processing and Control
By the university
Mälardalen University

