SwePub
Search the SwePub database

Search: WFRF:(Rahman Hamidur Doctoral Student 1984 )

  • Result 1-6 of 6
1.
  • Degas, A., et al. (author)
  • A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management : Current Trends and Development with Future Research Trajectory
  • 2022
  • In: Applied Sciences. - : MDPI. - 2076-3417. ; 12:3
  • Research review (peer-reviewed), abstract:
    • Air Traffic Management (ATM) will be more complex in the coming decades due to the growth and increased complexity of aviation and has to be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decisions made in ATM. Nonetheless, Artificial Intelligence (AI), which is one of the most researched topics in computer science, has not quite reached end users in the ATM domain. In this paper, we analyse the state of the art with regard to the usefulness of AI within the aviation/ATM domain. It covers research work on AI in ATM from the last decade, the extraction of relevant trends and features, and the extraction of representative dimensions. We analysed how eXplainable Artificial Intelligence (XAI) works in general and in ATM, examining where and why XAI is needed, how it is currently provided and its limitations, and then synthesised the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, and provide an example of its application in a scenario in 2030. It concludes that AI systems within ATM need further research for their acceptance by end users. The development of appropriate XAI methods and their validation by appropriate authorities and end users are key issues that need to be addressed. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
2.
  • Rahman, Hamidur, Doctoral Student, 1984-, et al. (author)
  • Artificial Intelligence-Based Life Cycle Engineering in Industrial Production : A Systematic Literature Review
  • 2022
  • In: IEEE Access. - : IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC. - 2169-3536. ; 10, s. 133001-133015
  • Research review (peer-reviewed), abstract:
    • For the last few years, cases of applying artificial intelligence (AI) to engineering activities towards sustainability have been reported. Life Cycle Engineering (LCE) offers the potential to systematically reach higher productivity levels, owing to its holistic perspective and consideration of economic and environmental targets. To address the current gap towards a more systematic deployment of AI with LCE (AI-LCE), we have performed a systematic literature review emphasizing three aspects: (1) the most prevalent AI techniques, (2) the current AI-improved LCE subfields and (3) the subfields highly enhanced by AI. A specific set of inclusion and exclusion criteria was used to identify and select academic papers from several fields, i.e. production, logistics, marketing and supply chain; after the selection process described in the paper we ended up with 42 scientific papers. The study and analysis show that many AI-LCE papers address Sustainable Development Goals, mainly Industry, Innovation, and Infrastructure; Sustainable Cities and Communities; and Responsible Consumption and Production. Overall, the papers give a picture of diverse AI techniques used in LCE. Production design and Maintenance and Repair are the most explored LCE subfields, whereas Logistics and Procurement are the least explored subareas. Research in AI-LCE is concentrated in a few dominant countries, especially countries with strong research funding and a focus on Industry 4.0; Germany stands out in its number of publications. The in-depth analysis of the selected and relevant scientific papers is helpful in getting a more accurate picture of the area, which enables a more systematic approach to AI-LCE in the future.
3.
  • Rahman, Hamidur, Doctoral Student, 1984- (author)
  • Artificial Intelligence for Non-Contact-Based Driver Health Monitoring
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • In clinical situations, a patient’s physical state is often monitored by sensors attached to the patient, and medical staff are alerted if the patient’s status changes in an undesirable or life-threatening direction. However, in unsupervised situations, such as when driving a vehicle, connecting sensors to the driver is often troublesome, and wired sensors may not produce sufficient signal quality due to factors such as movement and electrical disturbance. Using a camera as a non-contact sensor to extract physiological parameters based on video images offers a new paradigm for monitoring a driver’s health and mental state. Due to the advanced technical features in modern vehicles, driving is now faster, safer and more comfortable than before. To enhance transport safety (i.e. to avoid unexpected traffic accidents), it is necessary to consider a vehicle driver as a part of the traffic environment and thus monitor the driver’s health and mental state. Such a monitoring system is commonly developed based on two approaches: driving-behaviour-based and physiological-parameters-based. This research work demonstrates a non-contact approach that classifies a driver’s cognitive load based on physiological parameters through a camera system and vehicular data collected from controller area networks, using image processing, computer vision, machine learning (ML) and deep learning (DL). In this research, a camera is used as a non-contact sensor and pervasive approach for measuring and monitoring the physiological parameters. The contribution of this research study is four-fold: 1) a feature extraction approach to extract physiological parameters (i.e. heart rate [HR], respiration rate [RR], inter-beat interval [IBI], heart rate variability [HRV] and oxygen saturation [SpO2]) using a camera system in several challenging conditions (i.e. illumination, motion, vibration and movement); 2) feature extraction based on eye-movement parameters (i.e. saccade and fixation); 3) identification of key vehicular parameters and extraction of useful features from lateral speed (SP), steering wheel angle (SWA), steering wheel reversal rate (SWRR), steering wheel torque (SWT), yaw rate (YR), lanex (LAN) and lateral position (LP); 4) investigation of ML and DL algorithms for a driver’s cognitive load classification. Here, ML algorithms (i.e. logistic regression [LR], linear discriminant analysis [LDA], support vector machine [SVM], neural networks [NN], k-nearest neighbours [k-NN], decision tree [DT]) and DL algorithms (i.e. convolutional neural networks [CNN], long short-term memory [LSTM] networks and autoencoders [AE]) are used. One of the major contributions of this research work is that physiological parameters were extracted using a camera. According to the results, feature extraction of physiological parameters using a camera achieved the highest correlation coefficient of .96 for both HR and SpO2 compared to a reference system. The Bland-Altman plots showed 95% agreement considering the correlation between the camera and the reference wired sensors. For IBI, the achieved quality index was 97.5% considering a 100 ms R-peak error. The correlation coefficients for 13 eye-movement features between the non-contact approach and the reference eye-tracking system ranged from .82 to .95. For cognitive load classification using both the physiological and vehicular parameters, two separate studies were conducted: Study 1 with the 1-back task and Study 2 with the 2-back task. Finally, the highest average accuracy achieved in terms of cognitive load classification was 94% for Study 1 and 82% for Study 2 using the LR algorithm considering the HRV parameter. The highest average classification accuracy of cognitive load was 92% using SVM considering saccade and fixation parameters. In both cases, k-fold cross-validation was used for the validation, where the value of k was 10. The classification accuracies using CNN, LSTM and autoencoder were 91%, 90% and 90.3%, respectively. This research study shows that such a non-contact approach using ML, DL, image processing and computer vision is suitable for monitoring a driver’s cognitive state. (A minimal, illustrative sketch of the cross-validated classification step follows this entry.)
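The thesis abstract above reports its best Study 1 result with logistic regression on camera-derived HRV features, validated with 10-fold cross-validation. The following sketch illustrates only that classification step under stated assumptions: the feature matrix, labels and named HRV features are synthetic placeholders (not the thesis data), and scikit-learn is assumed as the ML library; it is not the thesis implementation.

    # Illustrative sketch with synthetic data: 10-fold cross-validated logistic
    # regression on (hypothetical) camera-derived HRV features for binary
    # cognitive load classification (e.g. baseline driving vs. 1-back task).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_windows = 200                          # number of analysis windows (placeholder)
    X = rng.normal(size=(n_windows, 5))      # e.g. SDNN, RMSSD, pNN50, LF/HF, mean IBI
    y = rng.integers(0, 2, size=n_windows)   # 0 = baseline driving, 1 = high cognitive load

    # Standardise the features, then fit logistic regression inside each fold
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)  # k = 10, as in the thesis
    scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
    print(f"10-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")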
4.
  • Rahman, Hamidur, Doctoral Student, 1984-, et al. (author)
  • Deep Learning in Remote Sensing : An Application to Detect Snow and Water in Construction Sites
  • 2021
  • In: Proceedings - 2021 4th International Conference on Artificial Intelligence for Industries, AI4I 2021. - 9781665434102 ; , s. 52-56
  • Conference paper (peer-reviewed), abstract:
    • It is important for a construction and property development company to know the weather conditions in its daily operations. In this paper, a deep learning-based approach is investigated to detect snow and rain conditions at construction sites using drone imagery. A Convolutional Neural Network (CNN) is developed for feature extraction, and classification is performed on those features using machine learning (ML) algorithms. The well-known existing deep learning models AlexNet and VGG16 are also deployed and tested on the dataset. Results show that a smaller CNN architecture with three convolutional layers was sufficient at extracting features relevant to the classification task at hand compared to the larger state-of-the-art architectures. The proposed model reached a top accuracy of 97.3% in binary classification and 96.5% when rain conditions were also taken into consideration. It was also found that ML algorithms, i.e. support vector machine (SVM), logistic regression and k-nearest neighbors, could be used as classifiers on feature maps extracted from CNNs, and a top accuracy of 90% was obtained using the SVM algorithm. (A minimal, illustrative sketch of the CNN-feature-plus-SVM pipeline follows this entry.)
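The conference paper above describes extracting features with a small three-layer CNN and then classifying those features with classical ML algorithms such as an SVM. The sketch below shows that general pipeline shape only, under stated assumptions: PyTorch and scikit-learn are assumed as libraries, the network layout is a plausible guess rather than the paper's architecture, and the images and labels are random placeholders rather than drone imagery.

    # Illustrative sketch with random data: a small CNN with three convolutional
    # layers used as a feature extractor, and an SVM trained on the resulting
    # feature vectors (snow vs. no-snow as a placeholder binary task).
    import torch
    import torch.nn as nn
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    class SmallCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),      # global pooling -> 64-dimensional feature vector
            )

        def forward(self, x):
            return self.features(x).flatten(1)

    # Placeholder "drone images": 200 RGB images of 64x64 pixels with binary labels
    images = torch.randn(200, 3, 64, 64)
    labels = torch.randint(0, 2, (200,))

    cnn = SmallCNN().eval()                   # in practice the CNN would be trained first
    with torch.no_grad():
        feats = cnn(images).numpy()           # extracted feature vectors

    X_tr, X_te, y_tr, y_te = train_test_split(feats, labels.numpy(), test_size=0.3, random_state=0)
    svm = SVC(kernel="rbf").fit(X_tr, y_tr)   # SVM as the classifier on CNN features
    print("held-out accuracy:", svm.score(X_te, y_te))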
5.
  • Rahman, Hamidur, Doctoral Student, 1984-, et al. (author)
  • Driver’s Cognitive Load Classification based on Eye Movement through Facial Image using Machine Learning
  • Other publication (other academic/artistic), abstract:
    • The driver's cognitive load is considered a good indication of whether the driver is alert or distracted, but determining cognitive load is challenging, and wired sensor solutions such as EEG and ECG are not preferred in real-world driving scenarios. Recent developments in image processing and machine learning, together with decreasing hardware prices, enable new solutions, and several interesting features related to the driver’s eyes are currently being explored in research. Two different wireless sensor systems, one commercial system giving eye position (SmartEye) and one Microsoft LifeCam Studio with a resolution of 1920 x 1080, were used for data collection. In this paper, two eye-movement parameters, saccade and fixation, are investigated through facial images, and 13 features are manually extracted. Five machine learning algorithms, support vector machine (SVM), logistic regression (LR), linear discriminant analysis (LDA), k-nearest neighbors (k-NN) and decision tree (DT), are investigated to classify the cognitive load. According to the results, the SVM model with a linear kernel function outperforms the other four classification methods, with an achieved average accuracy of 92%. In addition, three deep learning architectures, convolutional neural networks (CNN), long short-term memory (LSTM) and autoencoder (AE), are designed for both automatic feature extraction and cognitive load classification. The results show that the CNN architecture achieves the highest classification accuracy, 91%. Furthermore, the classification accuracy for the extracted eye-movement parameters is compared with reference eye-tracker signals, and the classification accuracies obtained with the eye tracker and the camera are observed to be very similar. (A minimal, illustrative sketch comparing the five classical classifiers follows this entry.)
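The publication above compares five classical classifiers (SVM with a linear kernel, LR, LDA, k-NN and DT) on 13 manually extracted saccade/fixation features. The sketch below mirrors only that comparison setup under stated assumptions: scikit-learn is assumed, the 13-dimensional feature matrix and labels are synthetic, and the cross-validation scheme is an illustrative choice rather than the study's exact protocol.

    # Illustrative sketch with synthetic data: cross-validated comparison of the
    # five classifiers named in the abstract on a 13-feature eye-movement matrix.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 13))        # 13 saccade/fixation features per sample (placeholder)
    y = rng.integers(0, 2, size=300)      # low vs. high cognitive load (placeholder labels)

    models = {
        "SVM (linear kernel)": SVC(kernel="linear"),
        "Logistic regression": LogisticRegression(max_iter=1000),
        "LDA": LinearDiscriminantAnalysis(),
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "Decision tree": DecisionTreeClassifier(random_state=0),
    }
    for name, model in models.items():
        acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=10).mean()
        print(f"{name}: mean accuracy {acc:.2f}")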
6.
  • Rahman, Hamidur, Doctoral Student, 1984-, et al. (author)
  • Vision-based driver’s cognitive load classification considering eye movement using machine learning and deep learning
  • 2021
  • In: Sensors. - : MDPI. - 1424-8220. ; 21:23
  • Journal article (peer-reviewed), abstract:
    • Due to the advancement of science and technology, modern cars are highly technical, more activity occurs inside the car and driving is faster; however, statistics show that the number of road fatalities has increased in recent years because of drivers’ unsafe behaviors. Therefore, to make the traffic environment safe, it is important to keep the driver alert and awake in both human-driven and autonomous cars. A driver’s cognitive load is considered a good indication of alertness, but determining cognitive load is challenging, and wired sensor solutions are not preferred in real-world driving scenarios. The recent development of non-contact approaches through image processing, together with decreasing hardware prices, enables new solutions, and several interesting features related to the driver’s eyes are currently being explored in research. This paper presents a vision-based method to extract useful parameters from a driver’s eye-movement signals, using manual feature extraction based on domain knowledge as well as automatic feature extraction using deep learning architectures. Five machine learning models and three deep learning architectures are developed to classify a driver’s cognitive load. The results show that the highest classification accuracies achieved are 92% by the support vector machine model with a linear kernel function and 91% by the convolutional neural network model. This non-contact technology can be a potential contributor to advanced driver assistance systems. (A minimal, illustrative sketch of sequence-based automatic feature learning follows this entry.)
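The journal article above contrasts manual feature extraction with automatic feature extraction by deep learning architectures (CNN, LSTM, autoencoder) applied to eye-movement signals. The sketch below illustrates the general idea of learning features directly from signal windows with an LSTM; it is not the paper's architecture. PyTorch is assumed, and the window length, channel count (e.g. gaze x/y and pupil diameter) and data are placeholders.

    # Illustrative sketch with random data: an LSTM that classifies cognitive load
    # directly from windows of eye-movement signals, i.e. automatic feature learning.
    import torch
    import torch.nn as nn

    class EyeLSTM(nn.Module):
        def __init__(self, n_channels=3, hidden=32, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                # x: (batch, time, channels)
            _, (h_n, _) = self.lstm(x)       # final hidden state summarises the window
            return self.head(h_n[-1])        # class logits

    # Placeholder batch: 64 windows, 120 time steps, 3 signal channels
    x = torch.randn(64, 120, 3)
    y = torch.randint(0, 2, (64,))

    model = EyeLSTM()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):                       # a few illustrative training steps
        optimiser.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimiser.step()
    print("training loss after 5 steps:", float(loss))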