SwePub

Result list for the search "WFRF:(Islam Mir Riyanul Doctoral Student 1991)"

  • Results 1-12 of 12
1.
  • Degas, A., et al. (authors)
  • A Survey on Artificial Intelligence (AI) and eXplainable AI in Air Traffic Management : Current Trends and Development with Future Research Trajectory
  • 2022
  • In: Applied Sciences. - : MDPI. - 2076-3417. ; 12:3
  • Research review (peer-reviewed), abstract:
    • Air Traffic Management (ATM) will be more complex in the coming decades due to the growth and increased complexity of aviation, and it has to be improved in order to maintain aviation safety. It is agreed that without significant improvement in this domain, the safety objectives defined by international organisations cannot be achieved, and a risk of more incidents/accidents is envisaged. Nowadays, computer science plays a major role in data management and decisions made in ATM. Nonetheless, Artificial Intelligence (AI), one of the most researched topics in computer science, has not quite reached end users in the ATM domain. In this paper, we analyse the state of the art with regard to the usefulness of AI within the aviation/ATM domain. This includes research work on AI in ATM over the last decade, the extraction of relevant trends and features, and the extraction of representative dimensions. We analysed how eXplainable Artificial Intelligence (XAI) works in general and within ATM, examining where and why XAI is needed, how it is currently provided and its limitations, and then synthesised the findings into a conceptual framework, named the DPP (Descriptive, Predictive, Prescriptive) model, with an example of its application in a scenario in 2030. We conclude that AI systems within ATM need further research to gain acceptance by end-users. The development of appropriate XAI methods, including validation by appropriate authorities and end-users, is a key issue that needs to be addressed. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
  •  
2.
  • Gorospe, Joseba, et al. (authors)
  • Analyzing Inter-Vehicle Collision Predictions during Emergency Braking with Automated Vehicles
  • 2023
  • In: International Conference on Wireless and Mobile Computing, Networking and Communications. - : IEEE Computer Society. - 9798350336672 ; , pp. 411-418
  • Conference paper (peer-reviewed), abstract:
    • Automated Vehicles (AVs) require sensing and perception to integrate data from multiple sources, such as cameras, lidars and radars, to operate safely and efficiently. Collaborative sensing through wireless vehicular communications can enhance this process. However, failures in sensors and communication systems may require the vehicle to perform a safe stop or emergency braking when encountering hazards. By identifying the conditions under which emergency braking can be performed without collisions, better automation models that also consider communications can be developed. Hence, we propose to employ Machine Learning (ML) to predict inter-vehicle collisions during emergency braking by utilizing a comprehensive dataset prepared through rigorous simulations. Using simulations and data-driven modeling has several advantages over physics-based models in this case, as it enables us, for example, to provide a dataset with varying vehicle kinematic parameters, traffic density, network load, vehicle automation controller parameters, and more. To further establish the conditions for inter-vehicle collisions, we analyze the predictions made through interpretable ML models and rank the features that contribute to collisions. We also extract human-interpretable rules that can establish the conditions leading to collisions between AVs during emergency braking. Finally, we plot the decision boundaries between different input features to separate the collision and non-collision classes and demonstrate the safe region of emergency braking. [An illustrative sketch of rule extraction and feature ranking with an interpretable classifier follows below.]
  •  
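
The abstract above mentions interpretable ML models, feature ranking and extracted rules. The following is a minimal, self-contained sketch of that general workflow on synthetic data; the feature names, the labelling rule and all parameters are illustrative assumptions, not the paper's dataset or models.

```python
# Illustrative sketch (not the authors' code): an interpretable collision
# classifier on synthetic emergency-braking data, with extracted rules
# and a feature ranking. Feature names are hypothetical.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(10, 35, n),     # initial_speed [m/s]
    rng.uniform(5, 60, n),      # headway [m]
    rng.uniform(0.02, 0.5, n),  # network_delay [s]
    rng.uniform(3, 9, n),       # max_deceleration [m/s^2]
])
feature_names = ["initial_speed", "headway", "network_delay", "max_deceleration"]
# Hypothetical labelling rule: collision if the stopping distance exceeds the headway.
stopping = X[:, 0] * X[:, 2] + X[:, 0] ** 2 / (2 * X[:, 3])
y = (stopping > X[:, 1]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("accuracy:", clf.score(X_te, y_te))
print(export_text(clf, feature_names=feature_names))   # human-readable rules
ranking = sorted(zip(feature_names, clf.feature_importances_), key=lambda t: -t[1])
print("feature ranking:", ranking)
```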
3.
  • Ahmed, Mobyen Uddin, Dr, 1976-, et al. (authors)
  • When a CBR in Hand is Better than Twins in the Bush
  • 2022
  • In: CEUR Workshop Proceedings, vol. 3389. - : CEUR-WS. ; , pp. 141-152
  • Conference paper (peer-reviewed), abstract:
    • AI methods referred to as interpretable are often discredited as inaccurate by proponents of a trade-off between interpretability and accuracy. In many problem contexts, however, this trade-off does not hold. This paper discusses a regression problem context, predicting flight take-off delays, where the most accurate data regression model was trained via the XGBoost implementation of gradient boosted decision trees. By building an XGB-CBR twin and converting the XGBoost feature importances into global weights in the CBR model, the resultant CBR model alone provides the most accurate local predictions, retains the global importances to provide a global explanation of the model, and offers the most interpretable representation for local explanations. This CBR model becomes a benchmark of accuracy and interpretability for this problem context, and hence it is used to evaluate the two additive feature attribution methods SHAP and LIME in explaining the XGBoost regression model. The results with respect to local accuracy and feature attribution point to potentially valuable future work. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org) [A minimal sketch of the importance-weighting idea follows below.]
  •  
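
The abstract above describes converting XGBoost feature importances into global weights of a CBR model. The sketch below illustrates only the general weighting idea under stated assumptions: scikit-learn's GradientBoostingRegressor stands in for XGBoost, a weighted k-nearest-neighbour regressor stands in for CBR retrieval and reuse, and the data are synthetic.

```python
# Minimal sketch of the "twin" idea (not the paper's implementation): use a
# boosted-tree model's global feature importances as attribute weights in a
# weighted k-nearest-neighbour retrieval, a simple stand-in for CBR reuse.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=1000, n_features=8, n_informative=4, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
weights = gbt.feature_importances_          # global importances -> case-base weights

scaler = StandardScaler().fit(X_tr)
# Scale each standardized feature by its importance so distances reflect the weights.
X_tr_w = scaler.transform(X_tr) * weights
X_te_w = scaler.transform(X_te) * weights

cbr = KNeighborsRegressor(n_neighbors=5).fit(X_tr_w, y_tr)   # retrieve + reuse (mean of neighbours)
print("boosted trees R^2:", gbt.score(X_te, y_te))
print("weighted k-NN ('CBR twin') R^2:", cbr.score(X_te_w, y_te))
```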
4.
  • Hurter, C., et al. (authors)
  • Usage of more transparent and explainable conflict resolution algorithm : Air traffic controller feedback
  • 2022
  • In: Transportation Research Procedia. - : Elsevier B.V. - 2352-1457 .- 2352-1465. ; 66:C, pp. 270-278
  • Journal article (peer-reviewed), abstract:
    • Recently, Artificial Intelligence (AI) algorithms have received increasing interest in various application domains, including Air Transportation Management (ATM). Different AI algorithms, in particular Machine Learning (ML) algorithms, are used to provide decision support for autonomous decision-making tasks in the ATM domain, e.g., predicting air transportation traffic and optimizing traffic flows. However, most of the time these automated systems are not accepted or trusted by the intended users, as the decisions provided by AI are often opaque, non-intuitive and not understandable by human operators. Safety is the major pillar of air traffic management, and no black-box process can be inserted into a decision-making process when human life is involved. To address this challenge related to the transparency of automated systems in the ATM domain, we investigated AI methods for predicting air transportation traffic conflicts and optimizing traffic flows, drawing on the domain of Explainable Artificial Intelligence (XAI). Here, AI models' explainability, in terms of understanding a decision (i.e., post hoc interpretability) and understanding how the model works (i.e., transparency), can be provided for air traffic controllers. In this paper, we report our research directions and our findings to support better decision making with AI algorithms and extended transparency.
  •  
5.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • A Novel Mutual Information based Feature Set for Drivers’ Mental Workload Evaluation Using Machine Learning
  • 2020
  • In: Brain Sciences. - Switzerland : MDPI AG. - 2076-3425. ; 10:8, pp. 1-23
  • Journal article (peer-reviewed), abstract:
    • Analysis of physiological signals, more specifically electroencephalography (EEG), is considered a very promising technique for obtaining objective measures of mental workload; however, it requires complex recording apparatus and therefore has poor usability for monitoring in-vehicle drivers' mental workload. This study proposes a methodology for constructing a novel mutual information-based feature set from the fusion of EEG and vehicular signals acquired in a real driving experiment, and deploys it to evaluate drivers' mental workload. The mutual information between EEG and vehicular signals was used as the prime factor for the fusion of features. To assess the reliability of the developed feature set, mental workload score prediction, classification and event classification tasks were performed using different machine learning models, and features extracted from EEG alone were used to compare the performance. For the prediction of mental workload scores, expert-defined scores were used as target values; for the classification tasks, true labels were derived from contextual information of the experiment. An extensive evaluation of every prediction task was carried out using different validation methods. In predicting the mental workload score from the proposed feature set, the lowest mean absolute error was 0.09, and for classifying mental workload, the highest accuracy was 94%. According to the outcome of the study, the novel mutual information-based features developed through the proposed approach can be employed to classify and monitor in-vehicle drivers' mental workload. [An illustrative sketch of the mutual-information step follows below.]
  •  
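
As a rough illustration of the mutual-information step described above, the sketch below ranks pairs of hypothetical EEG and vehicular features by mutual information using scikit-learn. The feature names and the synthetic dependence are assumptions for demonstration only, not the study's data.

```python
# Illustrative sketch only: ranking pairs of EEG-derived and vehicular features
# by mutual information, in the spirit of the MI-based fusion described above.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(42)
n = 500
eeg = {                                   # hypothetical EEG band-power features
    "theta_power": rng.normal(size=n),
    "alpha_power": rng.normal(size=n),
}
speed = rng.normal(size=n)
steering = 0.7 * eeg["theta_power"] + 0.3 * rng.normal(size=n)   # synthetic dependence
vehicle = {"speed": speed, "steering_entropy": steering}          # hypothetical vehicular features

for e_name, e_vals in eeg.items():
    for v_name, v_vals in vehicle.items():
        mi = mutual_info_regression(e_vals.reshape(-1, 1), v_vals, random_state=0)[0]
        print(f"MI({e_name}, {v_name}) = {mi:.3f}")
# Feature pairs with high MI would be fused (e.g., combined or concatenated)
# into the hybrid feature set used for workload prediction.
```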
6.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • Deep Learning for Automatic EEG Feature Extraction: An Application in Drivers' Mental Workload Classification
  • 2019
  • In: Communications in Computer and Information Science, Volume 1107. - Cham : Springer International Publishing. - 9783030324223 ; , pp. 121-135
  • Conference paper (peer-reviewed), abstract:
    • In the pursuit of reducing traffic accidents, drivers' mental workload (MWL) has been considered one of the vital aspects. To measure MWL in different driving situations, drivers' electroencephalography (EEG) has been studied intensively. However, in the literature, mostly manual analytic methods are applied to extract and select features from the EEG signals to quantify drivers' MWL, and the amount of time and effort required by these prevailing techniques motivates the need for automated feature extraction. This work investigates deep learning (DL) algorithms for extracting and selecting features from EEG signals recorded during naturalistic driving situations. Here, to compare the DL-based and traditional feature extraction techniques, a number of classifiers have been deployed. Results show that the highest area under the receiver operating characteristic curve (AUC-ROC) is 0.94, achieved using the features extracted by a convolutional autoencoder (CNN-AE) together with a support vector machine, whereas with the features extracted by the traditional method the highest AUC-ROC is 0.78, obtained with a multi-layer perceptron. Thus, the outcome of this study shows that automatic feature extraction techniques based on a CNN-AE can outperform manual techniques in terms of classification accuracy. [A generic CNN-AE sketch follows below.]
  •  
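
The sketch below shows a generic 1-D convolutional autoencoder of the kind referred to as CNN-AE in the abstract above. The architecture, the assumed window shape (8 channels × 256 samples) and the tiny training loop are illustrative choices, not the paper's configuration; the flattened latent features would be passed to a downstream classifier such as an SVM.

```python
# Compact, generic 1-D convolutional autoencoder for EEG windows; shown only
# to illustrate the CNN-AE feature-extraction idea on random data.
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self, channels: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 16, kernel_size=5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, channels, kernel_size=5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent feature maps
        return self.decoder(z), z

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 8, 256)            # batch of synthetic EEG windows
for _ in range(5):                     # tiny demo training loop (reconstruction loss)
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, z = model(x)
features = z.flatten(start_dim=1)      # flattened latent features for a downstream SVM
print(features.shape)                   # torch.Size([32, 2048]) = 32 channels x 64 samples
```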
7.
  • Islam, Mir Riyanul, Doctoral Student, 1991- (author)
  • Explainable Artificial Intelligence for Enhancing Transparency in Decision Support Systems
  • 2024
  • Doctoral thesis (other academic/artistic), abstract:
    • Artificial Intelligence (AI) is recognized as an advanced technology that assists in decision-making processes with high accuracy and precision. However, many AI models are generally appraised as black boxes due to their reliance on complex inference mechanisms. The intricacies of how and why these AI models reach a decision are often not comprehensible to human users, resulting in concerns about the acceptability of their decisions. Previous studies have shown that the lack of an associated explanation in a human-understandable form makes decisions unacceptable to end-users. Here, the research domain of Explainable AI (XAI) provides a wide range of methods with the common theme of investigating how AI models reach a decision and of explaining it. These explanation methods aim to enhance transparency in Decision Support Systems (DSSs), which is particularly crucial in safety-critical domains like Road Safety (RS) and Air Traffic Flow Management (ATFM). Despite ongoing developments, DSSs are still in an evolving phase for safety-critical applications. Improved transparency, facilitated by XAI, emerges as a key enabler for making these systems operationally viable in real-world applications, addressing acceptability and trust issues. Besides, certification authorities are less likely to approve the systems for general use, following the current Right to Explanation mandate from the European Commission and similar directives from organisations across the world. This urge to permeate the prevailing systems with explanations paves the way for research studies on XAI centred on DSSs.
      To this end, this thesis work primarily developed explainable models for the application domains of RS and ATFM. In particular, explainable models are developed for assessing drivers' in-vehicle mental workload and driving behaviour through classification and regression tasks. In addition, a novel method is proposed for generating a hybrid feature set from vehicular and electroencephalography (EEG) signals using mutual information (MI). The use of this feature set is successfully demonstrated to reduce the effort required for the complex computations of EEG feature extraction. The concept of MI was further utilized in generating human-understandable explanations of mental workload classification. For the domain of ATFM, an explainable model for flight take-off time delay prediction from historical flight data is developed and presented in this thesis. The insights gained through the development and evaluation of the explainable applications for the two domains underscore the need for further research on the advancement of XAI methods.
      In this doctoral research, the explainable applications for the DSSs are developed with additive feature attribution (AFA) methods, a class of XAI methods that is popular in current XAI research. Nevertheless, several sources in the literature assert that feature attribution methods often yield inconsistent results that need proper evaluation. However, the existing body of literature on evaluation techniques is still immature, offering numerous suggested approaches without a standardized consensus on their optimal application in various scenarios. To address this issue, comprehensive evaluation criteria are also developed for AFA methods, as the literature on XAI suggests. The proposed evaluation process considers the underlying characteristics of the data and utilizes the additive form of Case-Based Reasoning, namely AddCBR. AddCBR is proposed in this thesis and is demonstrated to complement the evaluation process as the baseline against which to compare the feature attributions produced by the AFA methods. Apart from generating explanations with feature attribution, this thesis work also proposes iXGB (interpretable XGBoost). iXGB generates decision rules and counterfactuals to support the output of an XGBoost model, thus improving its interpretability. From the functional evaluation, iXGB demonstrates the potential to be used for interpreting arbitrary tree-ensemble methods.
      In essence, this doctoral thesis initially contributes to the development of ideally evaluated explainable models tailored for two distinct safety-critical domains, with the aim of augmenting transparency within the corresponding DSSs. Additionally, the thesis introduces novel methods for generating more comprehensible explanations in different forms, surpassing existing approaches, and it showcases a robust evaluation approach for XAI methods.
  •  
8.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • Interpretable Machine Learning for Modelling and Explaining Car Drivers' Behaviour : An Exploratory Analysis on Heterogeneous Data
  • 2023
  • Conference paper (peer-reviewed), abstract:
    • Understanding individual car drivers' behavioural variations and heterogeneity is a significant aspect of developing car simulator technologies, which are widely used in transport safety. This study characterizes the heterogeneity in drivers' behaviour in terms of risk and hurry, using both real-time on-track and in-simulator driving performance features. Machine learning (ML) interpretability has become increasingly crucial for identifying accurate and relevant structural relationships between spatial events and the factors that explain drivers' behaviour while it is being classified and the explanations for it are evaluated. However, ML algorithms with high predictive power often ignore the characteristics of non-stationary domain relationships in spatiotemporal data (e.g., dependence, heterogeneity), which can lead to incorrect interpretations and poor management decisions. This study addresses this critical issue of 'interpretability' in ML-based modelling of the structural relationships between events and the corresponding features of car drivers' behavioural variations. An exploratory experiment is described that involves simulator and real driving concurrently, with the goal of enhancing simulator technologies. Initially, several analytic techniques were explored on the heterogeneous data to examine simulator bias in drivers' behaviour. Afterwards, five different ML classifier models were developed to classify risk and hurry in drivers' behaviour in real and simulator driving. Furthermore, two different feature attribution-based explanation models were developed to explain the decisions of the classifiers. According to the results, among the classifiers, Gradient Boosted Decision Trees performed best with a classification accuracy of 98.62%. After quantitative evaluation, among the feature attribution methods, the explanation from Shapley Additive Explanations (SHAP) was found to be more accurate. The use of different metrics for evaluating explanation methods and their outcomes paves the way for further research on enhancing feature attribution methods. [A minimal SHAP sketch follows below.]
  •  
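
As an illustration of the SHAP-based explanation step mentioned above, the sketch below computes per-instance feature attributions for a gradient-boosted classifier on synthetic data, assuming the shap package is available. It does not reproduce the study's data, features or reported accuracy.

```python
# Sketch of the explanation step only: SHAP values for a gradient-boosted
# classifier on synthetic "driver behaviour" data.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=6, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
explainer = shap.TreeExplainer(clf)          # model-specific explainer for tree ensembles
shap_values = explainer.shap_values(X_te)    # per-instance feature attributions

print("accuracy:", clf.score(X_te, y_te))
print("attributions for first test instance:", np.round(shap_values[0], 3))
```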
9.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • Investigating Additive Feature Attribution for Regression
  • 2023
  • Other publication (other academic/artistic), abstract:
    • Feature attribution is a class of explainable artificial intelligence (XAI) methods that produce the contributions of data features to a model's decision. There are multiple accounts stating that feature attribution methods produce inconsistent results and should always be evaluated. However, the existing body of literature on evaluation techniques is still immature, with multiple proposed techniques and a lack of widely adopted methods, making it difficult to identify the best approach for each circumstance. This article investigates an approach to creating synthetic data for regression that can be used to evaluate the results of feature attribution methods. Starting from a real-world dataset, the proposed approach describes how to create synthetic data that preserve the patterns of the original data and enable comprehensive evaluation of XAI methods. This research also demonstrates how global and local feature attributions can be represented in the additive form of case-based reasoning as a benchmark method for evaluation. Finally, this work demonstrates a case in which, for a regression task, a method that includes a standardization step does not produce feature attributions of the same quality as one that does not. [A toy example of evaluating attributions against known ground truth follows below.]
  •  
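
The toy example below illustrates the underlying evaluation idea: when data are generated from a known additive model, the ground-truth per-instance attribution of each feature is known, and any attribution method can be scored against it. This is a simplification for illustration, not the synthetic-data procedure proposed in the publication.

```python
# Toy illustration: for data generated from a known additive model, the "true"
# attribution of feature i on an instance is w_i * (x_i - mean_i); attributions
# implied by a fitted model can then be scored against this ground truth.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, d = 1000, 4
w = np.array([3.0, -2.0, 0.5, 0.0])          # known ground-truth weights
X = rng.normal(size=(n, d))
y = X @ w + rng.normal(scale=0.1, size=n)

true_attr = (X - X.mean(axis=0)) * w          # ground-truth additive attributions

model = LinearRegression().fit(X, y)
est_attr = (X - X.mean(axis=0)) * model.coef_ # attributions implied by the fitted model

mae = np.abs(true_attr - est_attr).mean(axis=0)
print("mean absolute attribution error per feature:", np.round(mae, 4))
```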
10.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • iXGB : improving the interpretability of XGBoost using decision rules and counterfactuals
  • 2024
  • In: Proceedings of the 16th International Conference on Agents and Artificial Intelligence - Volume 3: ICAART. - 9789897586804 ; , pp. 1345-1353
  • Conference paper (other academic/artistic), abstract:
    • Tree-ensemble models, such as Extreme Gradient Boosting (XGBoost), are renowned Machine Learning models that achieve higher prediction accuracy than traditional tree-based models. This higher accuracy, however, comes at the cost of reduced interpretability; unlike single tree-based models, the decision path or prediction rule of XGBoost is not explicit. This paper proposes iXGB (interpretable XGBoost), an approach to improve the interpretability of XGBoost. iXGB approximates a set of rules from the internal structure of XGBoost and the characteristics of the data. In addition, iXGB generates a set of counterfactuals from the neighbourhood of the test instances to support end-users' understanding of their operational relevance. The performance of iXGB in generating rule sets is evaluated in experiments on real and benchmark datasets, which demonstrate reasonable interpretability. The evaluation results also indicate that the interpretability of XGBoost can be improved without using surrogate methods. [A generic sketch of the rules-plus-counterfactuals idea, not the iXGB algorithm itself, follows below.]
  •  
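
The sketch below is not the iXGB algorithm: according to the abstract, iXGB derives rules from the internal structure of XGBoost without surrogate methods, whereas this illustration uses off-the-shelf tools (a surrogate decision tree for rules and a nearest dissimilar training instance as a naive counterfactual) only to show the two kinds of output, rules and counterfactuals, on synthetic data. GradientBoostingRegressor stands in for XGBoost to avoid an extra dependency.

```python
# Generic rules-and-counterfactuals illustration (not iXGB).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=600, n_features=5, noise=3.0, random_state=0)
booster = GradientBoostingRegressor(random_state=0).fit(X, y)

# 1) Rule-like summary via a shallow surrogate fitted to the booster's predictions.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, booster.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))

# 2) Naive counterfactual: nearest training point whose prediction differs markedly.
x_test = X[0]
pred = booster.predict(x_test.reshape(1, -1))[0]
dists = np.linalg.norm(X - x_test, axis=1)
candidates = np.abs(booster.predict(X) - pred) > np.std(y)    # "different enough" outputs
cf_idx = np.argmin(np.where(candidates, dists, np.inf))
print("counterfactual index:", cf_idx,
      "prediction:", booster.predict(X[cf_idx].reshape(1, -1))[0])
```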
11.
  • Islam, Mir Riyanul, Doctoral Student, 1991-, et al. (authors)
  • Local and Global Interpretability Using Mutual Information in Explainable Artificial Intelligence
  • 2021
  • In: 2021 8th International Conference on Soft Computing & Machine Intelligence (ISCMI 2021). - : IEEE. - 9781728186832 ; , pp. 191-195
  • Conference paper (peer-reviewed), abstract:
    • Numerous studies have exploited the potential of Artificial Intelligence (AI) and Machine Learning (ML) models to develop intelligent systems in diverse domains for complex tasks such as data analysis, feature extraction, prediction and recommendation. However, these systems presently face acceptability issues from end-users. The models deployed behind the systems mostly analyse the correlations or dependencies between input and output to uncover the important characteristics of the input features, but they lack explainability and interpretability, which causes the acceptability issues of intelligent systems and has given rise to the research domain of eXplainable Artificial Intelligence (XAI). In this study, to overcome these shortcomings, a hybrid XAI approach is developed to explain an AI/ML model's inference mechanism as well as its final outcome. The overall approach comprises 1) a convolutional encoder that extracts deep features from the data and computes their relevance to features extracted using domain knowledge, 2) a model for classifying data points using the features from the autoencoder, and 3) a process of explaining the model's working procedure and decisions using mutual information to provide global and local interpretability. To demonstrate and validate the proposed approach, experimentation was performed using an electroencephalography dataset from road safety to classify drivers' in-vehicle mental workload. The outcome of the experiment was promising: a Support Vector Machine classifier for mental workload achieved approximately 89% accuracy. Moreover, the proposed approach can also provide an explanation of the classifier model's behaviour and decisions through the combined illustration of Shapley values and mutual information.
  •  
12.
  • Jmoona, Waleed, et al. (authors)
  • Explaining the Unexplainable : Role of XAI for Flight Take-Off Time Delay Prediction
  • 2023
  • In: AIAI 2023. IFIP Advances in Information and Communication Technology, vol. 676. - : Springer Science and Business Media Deutschland GmbH. - 9783031341069 ; , pp. 81-93
  • Conference paper (peer-reviewed), abstract:
    • Flight Take-Off Time (TOT) delay prediction is essential for optimizing capacity-related tasks in Air Traffic Management (ATM) systems. Recently, the ATM domain has put effort into predicting TOT delays using machine learning (ML) algorithms, which are often seen as "black boxes"; therefore, it is difficult for air traffic controllers (ATCOs) to understand how the algorithms have reached their decisions. Hence, ATCOs are reluctant to trust the decisions or predictions provided by the algorithms. This research paper explores the use of explainable artificial intelligence (XAI) for explaining to ATCOs the flight TOT delays predicted by ML-based predictive models. Here, three post hoc explanation methods are employed to explain the models' predictions. Quantitative and user evaluations are conducted to assess the acceptability and usability of the XAI methods in explaining the predictions to ATCOs. The results show that the post hoc methods can successfully mimic the inference mechanism and explain the models' individual predictions. The user evaluation reveals that user-centric explanations are more usable and preferred by ATCOs. These findings demonstrate the potential of XAI to improve the transparency and interpretability of ML models in the ATM domain.
  •  
