SwePub
Search the SwePub database


Hit list for search "L773:0957 4174 OR L773:1873 6793"


  • Results 1-50 of 112
1.
  • Afzal, Wasif, et al. (authors)
  • On the application of genetic programming for software engineering predictive modeling : A systematic review
  • 2011
  • In: Expert Systems with Applications. - : Pergamon-Elsevier Science Ltd. - 0957-4174 .- 1873-6793. ; 38:9, pp. 11984-11997
  • Research review (peer-reviewed), abstract:
    • The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.
2.
  • Aler, Ricardo, et al. (authors)
  • Study of Hellinger Distance as a splitting metric for Random Forests in balanced and imbalanced classification datasets
  • 2020
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 149
  • Journal article (peer-reviewed), abstract:
    • Hellinger Distance (HD) is a splitting metric that has been shown to have an excellent performance for imbalanced classification problems for methods based on Bagging of trees, while also showing good performance for balanced problems. Given that Random Forests (RF) use Bagging as one of two fundamental techniques to create diversity in the ensemble, it could be expected that HD is also effective for this ensemble method. The main aim of this article is to carry out an extensive investigation of important aspects of the use of HD in RF, including handling of multi-class problems, hyper-parameter optimization, metrics comparison, probability estimation, and metrics combination. In particular, HD is compared to other commonly used splitting metrics (Gini and Gain Ratio) in several contexts: balanced/imbalanced and two-class/multi-class. Two aspects related to classification problems are assessed: classification itself and probability estimation. HD is defined for two-class problems, but there are several ways in which it can be extended to deal with multi-class, and this article studies the performance of the available options. Finally, even though HD can be used as an alternative to other splitting metrics, there is no reason to limit RF to use just one of them. Therefore, the final study of this article is to determine whether selecting the splitting metric using cross-validation on the training data can improve results further. Results show HD to be a robust measure for RF, with some weakness for balanced multi-class datasets (especially for probability estimation). Combination of metrics is able to result in a more robust performance. However, experiments of HD with text datasets show Gini to be more suitable than HD for this kind of problem.
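The two-class splitting criterion studied above can be sketched directly from class counts. A minimal illustration (the function name and interface are hypothetical, not the authors' code): larger values indicate a split that better separates the class-conditional distributions, and the measure ignores class priors, which is what makes it attractive under imbalance.

```python
import math

def hellinger_split(pos_left, neg_left, pos_right, neg_right):
    """Hellinger distance between the class-conditional distributions
    induced by a binary split (HDDT-style criterion; larger is better).
    Assumes the node contains at least one example of each class."""
    total_pos = pos_left + pos_right   # positives reaching the node
    total_neg = neg_left + neg_right   # negatives reaching the node
    d = 0.0
    for p, n in ((pos_left, neg_left), (pos_right, neg_right)):
        # Compare the fraction of each class routed to this branch.
        d += (math.sqrt(p / total_pos) - math.sqrt(n / total_neg)) ** 2
    return math.sqrt(d)
```

Unlike Gini, the score depends only on the within-class proportions sent left and right, so rescaling one class's frequency leaves it unchanged.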
3.
  • Altarabichi, Mohammed Ghaith, 1981-, et al. (authors)
  • Fast Genetic Algorithm for feature selection — A qualitative approximation approach
  • 2023
  • In: Expert Systems with Applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 211
  • Journal article (peer-reviewed), abstract:
    • Evolutionary Algorithms (EAs) are often challenging to apply in real-world settings since evolutionary computations involve a large number of evaluations of a typically expensive fitness function. For example, an evaluation could involve training a new machine learning model. An approximation (also known as a meta-model or surrogate) of the true function can be used in such applications to alleviate the computation cost. In this paper, we propose a two-stage surrogate-assisted evolutionary approach to address the computational issues arising from using Genetic Algorithm (GA) for feature selection in a wrapper setting for large datasets. We define “Approximation Usefulness” to capture the necessary conditions to ensure correctness of the EA computations when an approximation is used. Based on this definition, we propose a procedure to construct a lightweight qualitative meta-model by the active selection of data instances. We then use the meta-model to carry out the feature selection task. We apply this procedure to the GA-based algorithm CHC (Cross generational elitist selection, Heterogeneous recombination and Cataclysmic mutation) to create a Qualitative approXimations variant, CHCQX. We show that CHCQX converges faster to feature subset solutions of significantly higher accuracy (as compared to CHC), particularly for large datasets with over 100K instances. We also demonstrate the applicability of the thinking behind our approach more broadly to Swarm Intelligence (SI), another branch of the Evolutionary Computation (EC) paradigm, with results from PSOQX, a qualitative approximation adaptation of the Particle Swarm Optimization (PSO) method. A GitHub repository with the complete implementation is available.
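The two-stage idea of screening candidates with a cheap surrogate and spending exact evaluations only on the most promising ones can be sketched as follows. This is a toy sketch under stated assumptions: `TRUE_MASK`, both fitness functions and all parameters are illustrative stand-ins, and the GA loop is a plain elitist mutation scheme, not the CHC algorithm the paper builds on.

```python
import random

random.seed(0)

# Toy stand-ins: the "expensive" fitness would normally train a model on the
# full dataset; the surrogate scores candidates on a small subset instead.
TRUE_MASK = [1, 1, 1, 0, 0, 0, 0, 0]           # hypothetical useful features

def exact_fitness(mask):                        # expensive oracle (assumed)
    hits = sum(1 for m, t in zip(mask, TRUE_MASK) if m == t)
    return hits / len(mask)

def surrogate_fitness(mask):                    # cheap, slightly noisy proxy
    return exact_fitness(mask) + random.uniform(-0.05, 0.05)

def mutate(mask, rate=0.2):
    return [b ^ (random.random() < rate) for b in mask]

def qualitative_ga(pop_size=20, generations=30, screen_top=5):
    pop = [[random.randint(0, 1) for _ in TRUE_MASK] for _ in range(pop_size)]
    best, best_f = None, -1.0
    for _ in range(generations):
        # Stage 1: rank the whole population with the cheap surrogate.
        pop.sort(key=surrogate_fitness, reverse=True)
        # Stage 2: spend exact evaluations only on the screened elite.
        for cand in pop[:screen_top]:
            f = exact_fitness(cand)
            if f > best_f:
                best, best_f = cand[:], f
        # Next generation: mutate the surrogate-ranked elite.
        pop = [mutate(random.choice(pop[:screen_top])) for _ in range(pop_size)]
    return best, best_f
```

The surrogate only needs to rank candidates roughly correctly (the "Approximation Usefulness" condition), since the exact function still decides what counts as the best solution.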
4.
  • Argyrou, Argyris, et al. (authors)
  • A semi-supervised tool for clustering accounting databases with applications to internal controls
  • 2011
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 38:9, pp. 11176-11181
  • Journal article (peer-reviewed), abstract:
    • A considerable body of literature attests to the significance of internal controls; however, little is known on how the clustering of accounting databases can function as an internal control procedure. To explore this issue further, this paper puts forward a semi-supervised tool that is based on self-organizing map and the IASB XBRL Taxonomy. The paper validates the proposed tool via a series of experiments on an accounting database provided by a shipping company. Empirical results suggest the tool can cluster accounting databases in homogeneous and well-separated clusters that can be interpreted within an accounting context. Further investigations reveal that the tool can compress a large number of similar transactions, and also provide information comparable to that of financial statements. The findings demonstrate that the tool can be applied to verify the processing of accounting transactions as well as to assess the accuracy of financial statements, and thus supplement internal controls.
5.
  • Bacauskiene, Marija, et al. (authors)
  • Random forests based monitoring of human larynx using questionnaire data
  • 2012
  • In: Expert Systems with Applications. - Amsterdam : Elsevier. - 0957-4174 .- 1873-6793. ; 39:5, pp. 5506-5512
  • Journal article (peer-reviewed), abstract:
    • This paper is concerned with soft computing techniques-based noninvasive monitoring of the human larynx using subjects’ questionnaire data. By applying random forests (RF), questionnaire data are categorized into a healthy class and several classes of disorders including: cancerous, noncancerous, diffuse, nodular, paralysis, and an overall pathological class. The most important questionnaire statements are determined using RF variable importance evaluations. To explore the data represented by the variables used by RF, t-distributed stochastic neighbor embedding (t-SNE) and multidimensional scaling (MDS) are applied to the RF data proximity matrix. When testing the developed tools on a set of data collected from 109 subjects, 100% classification accuracy was obtained on unseen data in binary classification into the healthy and pathological classes. An accuracy of 80.7% was achieved when classifying the data into the healthy, cancerous, and noncancerous classes. The applied t-SNE and MDS mapping techniques yield two-dimensional maps of the data and facilitate data exploration aimed at identifying subjects belonging to a “risk group”. It is expected that the developed tools will be of great help in preventive health care in laryngology.
6.
  • Bagloee, S. A., et al. (authors)
  • A hybrid machine-learning and optimization method for contraflow design in post-disaster cases and traffic management scenarios
  • 2019
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 124, pp. 67-81
  • Journal article (peer-reviewed), abstract:
    • The growing number of man-made and natural disasters in recent years has made disaster management a focal point of interest and research. To assist and streamline emergency evacuation, changing the directions of roads (called contraflow, a traffic control measure) has proven to be an effective, quick and affordable scheme in the disaster-management action list. Contraflow is a computationally challenging problem (known to be NP-hard), hence developing an efficient method applicable to real-world, large-sized cases is a significant challenge in the literature. To cope with its complexities and to tailor it to practical applications, a hybrid heuristic method based on a machine-learning model and bilevel optimization is developed. The idea is to try and test several contraflow scenarios, providing a training dataset for a supervised learning (regression) model which is then used in an optimization framework to find a better scenario in an iterative process. This method is coded as a single computer program synchronized with GAMS (for optimization), MATLAB (for machine learning), EMME3 (for traffic simulation), MS-Access (for data storage) and MS-Excel (as an interface), and it is tested using a real dataset from Winnipeg and the Sioux-Falls benchmark network. The algorithm managed to find globally optimal solutions for the Sioux-Falls example and improved accessibility to the dense and congested central areas of Winnipeg just by changing the direction of some roads.
7.
  • Bahnsen, Alejandro Correa, et al. (authors)
  • Example-dependent cost-sensitive decision trees
  • 2015
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 42:19, pp. 6609-6619
  • Journal article (peer-reviewed), abstract:
    • Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, thus leaving opportunities to investigate the potential impact of algorithms that take the real financial example-dependent costs into account during training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criterion. Then, using three different databases from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also creates simpler models that are easier to analyze.
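The core of a cost-based impurity measure of this kind can be sketched in a few lines. A simplified illustration, assuming zero true-positive/true-negative costs (names are hypothetical, not the paper's implementation): the impurity of a node is the smaller of the total example-dependent costs incurred by predicting everyone in it negative or positive, and a candidate split is scored by the savings it produces.

```python
def cost_impurity(examples):
    """Example-dependent cost-based impurity of one node.
    examples: list of (label, c_fp, c_fn) with label in {0, 1};
    c_fp / c_fn are that example's false-positive / false-negative costs."""
    cost_pred_neg = sum(c_fn for y, c_fp, c_fn in examples if y == 1)
    cost_pred_pos = sum(c_fp for y, c_fp, c_fn in examples if y == 0)
    return min(cost_pred_neg, cost_pred_pos)   # cost of the cheaper label

def cost_gain(parent, left, right):
    """Savings achieved by a candidate split (larger is better)."""
    return cost_impurity(parent) - cost_impurity(left) - cost_impurity(right)
```

A split that isolates a high-cost fraud case scores well even if it barely changes the class proportions, which is exactly what a constant-cost criterion like Gini cannot express.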
8.
  • Bahnsen, Alejandro Correa, et al. (authors)
  • Feature engineering strategies for credit card fraud detection
  • 2016
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 51, pp. 134-142
  • Journal article (peer-reviewed), abstract:
    • Every year billions of Euros are lost worldwide to credit card fraud, forcing financial institutions to continuously improve their fraud detection systems. In recent years, several studies have proposed the use of machine learning and data mining techniques to address this problem. However, most studies used some sort of misclassification measure to evaluate the different solutions, and did not take into account the actual financial costs associated with the fraud detection process. Moreover, when constructing a credit card fraud detection model, it is very important to extract the right features from the transactional data. This is usually done by aggregating the transactions in order to observe the spending behavioral patterns of the customers. In this paper we expand the transaction aggregation strategy and propose to create a new set of features based on analyzing the periodic behavior of the time of a transaction using the von Mises distribution. Then, using a real credit card fraud dataset provided by a large European card processing company, we compare state-of-the-art credit card fraud detection models and evaluate how the different sets of features impact the results. By including the proposed periodic features into the methods, the results show an average increase in savings of 13%.
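A periodic time-of-transaction feature in the spirit described above can be sketched with the von Mises density. A minimal sketch: the function names are hypothetical, kappa uses the standard mean-resultant-length approximation, and the Bessel normalizer is computed by series expansion; this is not the paper's implementation.

```python
import math

def _bessel_i0(k, terms=30):
    # Modified Bessel function of the first kind, order 0 (series expansion;
    # adequate for moderate k, not for a near-degenerate fit with huge kappa).
    return sum((k / 2) ** (2 * j) / math.factorial(j) ** 2 for j in range(terms))

def fit_von_mises(hours):
    """Fit a von Mises distribution to times of day (0-24h) mapped to angles.
    Assumes the hours are not all identical (kappa would diverge)."""
    angles = [2 * math.pi * h / 24 for h in hours]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    mu = math.atan2(s, c)                      # circular mean direction
    r = math.hypot(c, s)                       # mean resultant length
    kappa = r * (2 - r * r) / (1 - r * r)      # standard approximation
    return mu, kappa

def periodic_feature(hour, mu, kappa):
    """Density of the transaction time under the fitted distribution:
    high for a customer's usual hours, low for anomalous ones."""
    a = 2 * math.pi * hour / 24
    return math.exp(kappa * math.cos(a - mu)) / (2 * math.pi * _bessel_i0(kappa))
```

Because the distribution lives on a circle, a 23:30 transaction is correctly treated as close to a 00:15 habit, which a plain hour-of-day average gets wrong.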
9.
  • Bandaru, Sunith, et al. (authors)
  • Data mining methods for knowledge discovery in multi-objective optimization : Part A - Survey
  • 2017
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 70, pp. 139-159
  • Research review (peer-reviewed), abstract:
    • Real-world optimization problems typically involve multiple objectives to be optimized simultaneously under multiple constraints and with respect to several variables. While multi-objective optimization itself can be a challenging task, equally difficult is the ability to make sense of the obtained solutions. In this two-part paper, we deal with data mining methods that can be applied to extract knowledge about multi-objective optimization problems from the solutions generated during optimization. This knowledge is expected to provide deeper insights about the problem to the decision maker, in addition to assisting the optimization process in future design iterations through an expert system. The current paper surveys several existing data mining methods and classifies them by methodology and type of knowledge discovered. Most of these methods come from the domain of exploratory data analysis and can be applied to any multivariate data. We specifically look at methods that can generate explicit knowledge in a machine-usable form. A framework for knowledge-driven optimization is proposed, which involves both online and offline elements of knowledge discovery. One of the conclusions of this survey is that while there are a number of data mining methods that can deal with data involving continuous variables, only a few ad hoc methods exist that can provide explicit knowledge when the variables involved are of a discrete nature. Part B of this paper proposes new techniques that can be used with such datasets and applies them to discrete variable multi-objective problems related to production systems. 
10.
  • Bandaru, Sunith, et al. (authors)
  • Data mining methods for knowledge discovery in multi-objective optimization : Part B - New developments and applications
  • 2017
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 70, pp. 119-138
  • Journal article (peer-reviewed), abstract:
    • The first part of this paper served as a comprehensive survey of data mining methods that have been used to extract knowledge from solutions generated during multi-objective optimization. The current paper addresses three major shortcomings of existing methods, namely, lack of interactiveness in the objective space, inability to handle discrete variables and inability to generate explicit knowledge. Four data mining methods are developed that can discover knowledge in the decision space and visualize it in the objective space. These methods are (i) sequential pattern mining, (ii) clustering-based classification trees, (iii) hybrid learning, and (iv) flexible pattern mining. Each method uses a unique learning strategy to generate explicit knowledge in the form of patterns, decision rules and unsupervised rules. The methods are also capable of taking the decision maker's preferences into account to generate knowledge unique to preferred regions of the objective space. Three realistic production systems involving different types of discrete variables are chosen as application studies. A multi-objective optimization problem is formulated for each system and solved using NSGA-II to generate the optimization datasets. Next, all four methods are applied to each dataset. In each application, the methods discover similar knowledge for specified regions of the objective space. Overall, the unsupervised rules generated by flexible pattern mining are found to be the most consistent, whereas the supervised rules from classification trees are the most sensitive to user-preferences. 
11.
  • Barrow, Devon, et al. (authors)
  • Automatic robust estimation for exponential smoothing : Perspectives from statistics and machine learning
  • 2020
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 160
  • Journal article (peer-reviewed), abstract:
    • A major challenge in automating the production of a large number of forecasts, as often required in many business applications, is the need for robust and reliable predictions. Increased noise, outliers and structural changes in the series, all too common in practice, can severely affect the quality of forecasting. We investigate ways to increase the reliability of exponential smoothing forecasts, the most widely used family of forecasting models in business forecasting. We consider two alternative sets of approaches, one stemming from statistics and one from machine learning. To this end, we adapt M-estimators, boosting and inverse boosting to parameter estimation for exponential smoothing. We propose appropriate modifications that are necessary for time series forecasting while aiming to obtain scalable algorithms. We evaluate the various estimation methods using multiple real datasets and find that several approaches outperform the widely used maximum likelihood estimation. The novelty of this work lies in (1) demonstrating the usefulness of M-estimators, (2) demonstrating the usefulness of inverse boosting, which outperforms standard boosting approaches, and (3) taking a comparative look at statistics- versus machine-learning-inspired approaches.
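M-estimation for simple exponential smoothing, one of the approaches compared above, can be sketched by replacing the squared one-step error with a robust loss when fitting the smoothing parameter. A minimal sketch with a grid search over alpha; the function names are hypothetical and the paper's actual estimation procedure differs.

```python
def huber(e, delta=1.0):
    # Huber rho: quadratic near zero, linear in the tails (outlier-robust).
    a = abs(e)
    return 0.5 * e * e if a <= delta else delta * (a - 0.5 * delta)

def ses_loss(y, alpha, rho):
    """Sum of rho(one-step-ahead error) for simple exponential smoothing."""
    level, loss = y[0], 0.0
    for t in range(1, len(y)):
        e = y[t] - level                 # one-step-ahead forecast error
        loss += rho(e)
        level += alpha * e               # SES update: l_t = l_{t-1} + alpha*e
    return loss

def fit_ses_alpha(y, rho, grid=101):
    # M-estimation by direct search over the smoothing parameter in [0, 1].
    return min((i / (grid - 1) for i in range(grid)),
               key=lambda a: ses_loss(y, a, rho))
```

With a genuine level shift the fitted alpha stays high so the forecast adapts, while a single outlier, whose error falls in the linear part of the loss, no longer drags alpha up.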
12.
  • Barua, Shaibal, et al. (authors)
  • Automatic driver sleepiness detection using EEG, EOG and contextual information
  • 2019
  • In: Expert Systems with Applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 115, pp. 121-135
  • Journal article (peer-reviewed), abstract:
    • The many vehicle crashes caused by driver sleepiness each year advocate the development of automated driver sleepiness detection (ADSD) systems. This study proposes an automatic sleepiness classification scheme designed using data from 30 drivers who repeatedly drove in a high-fidelity driving simulator, both in alert and in sleep-deprived conditions. Driver sleepiness classification was performed using four separate classifiers: k-nearest neighbours, support vector machines, case-based reasoning, and random forest, where physiological signals and contextual information were used as sleepiness indicators. The subjective Karolinska sleepiness scale (KSS) was used as the target value. An extensive evaluation of multiclass and binary classifications was carried out using 10-fold cross-validation and leave-one-out validation. With 10-fold cross-validation, the support vector machine showed better performance than the other classifiers (79% accuracy for multiclass and 93% accuracy for binary classification). The effect of individual differences was also investigated, showing a 10% increase in accuracy when data from the individual being evaluated was included in the training dataset. Overall, the support vector machine was found to be the most stable classifier. Adding contextual information to the physiological features improved the classification accuracy by 4% in multiclass classification and by 5% in binary classification.
13.
  • Begum, Shahina, et al. (authors)
  • Classification of physiological signals for wheel loader operators using Multi-scale Entropy analysis and case-based reasoning
  • 2014
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 41:2, pp. 295-305
  • Journal article (peer-reviewed), abstract:
    • Sensor signal fusion is becoming increasingly important in many areas, including medical diagnosis and classification. Today, clinicians/experts often diagnose stress, sleepiness and tiredness on the basis of information collected from several physiological sensor signals. Since there are large individual variations in sensor measurements, systems based on a single sensor can be vulnerable to noise and interference in this domain; multiple sensors can provide a more robust and reliable decision. Therefore, this paper presents a classification approach, Multivariate Multiscale Entropy Analysis-Case-Based Reasoning (MMSE-CBR), that classifies physiological parameters of wheel loader operators by combining the CBR approach with a data-level fusion method named Multivariate Multiscale Entropy (MMSE). The MMSE algorithm supports complexity analysis of multivariate biological recordings by aggregating several sensor measurements, e.g., Inter-beat-Interval (IBI) and Heart Rate (HR) from Electrocardiogram (ECG), Finger Temperature (FT), Skin Conductance (SC) and Respiration Rate (RR). Here, MMSE has been applied to extract features to formulate a case by fusing a number of physiological signals, and the CBR approach is applied to classify the cases by retrieving the most similar cases from the case library. Finally, the proposed approach, MMSE-CBR, has been evaluated with data from professional drivers at Volvo Construction Equipment, Sweden. The results demonstrate that the proposed system, which fuses information at the data level, could classify 'stressed' and 'healthy' subjects 83.33% correctly compared to an expert's classification. Furthermore, with another data set the achieved accuracy (83.3%) indicates that it could also classify two different conditions, 'adapt' (training) and 'sharp' (real-life driving), for the wheel loader operators. Thus, the new MMSE-CBR approach could support the classification of operators and may be of interest to researchers developing systems based on information collected from different sensor sources.
14.
  • Beikmohammadi, Ali, 1995-, et al. (authors)
  • SWP-LeafNET : A novel multistage approach for plant leaf identification based on deep CNN
  • 2022
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 202
  • Journal article (peer-reviewed), abstract:
    • Modern scientific and technological advances allow botanists to use computer vision-based approaches for plant identification tasks. These approaches have their own challenges. Leaf classification is a computer-vision task performed for the automated identification of plant species, a serious challenge due to variations in leaf morphology, including size, texture, shape, and venation. Researchers have recently become more inclined toward deep learning-based methods rather than conventional feature-based methods, due to the popularity and successful implementation of deep learning in image analysis, object recognition, and speech recognition. In this paper, to obtain an interpretable and reliable system, a botanist’s behavior in leaf identification is modeled by proposing a highly efficient method of maximum behavioral resemblance developed through three deep learning-based models. Different layers of the three models are visualized to ensure that the botanist’s behavior is modeled accurately. The first and second models are designed from scratch. For the third model, the pre-trained architecture MobileNetV2 is employed along with the transfer-learning technique. The proposed method is evaluated on two well-known datasets: Flavia and MalayaKew. According to a comparative analysis, the suggested approach is more accurate than hand-crafted feature extraction methods and other deep learning techniques, achieving 99.67% and 99.81% accuracy, respectively. Unlike conventional techniques that have their own specific complexities and depend on datasets, the proposed method requires no hand-crafted feature extraction. It also increases accuracy compared with other deep learning techniques. Moreover, SWP-LeafNET is distributable and considerably faster than other methods because it uses shallower models with fewer parameters asynchronously.
15.
  • Borg, Anton, et al. (authors)
  • Detecting serial residential burglaries using clustering
  • 2014
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 41:11, pp. 5252-5266
  • Journal article (peer-reviewed), abstract:
    • According to the Swedish National Council for Crime Prevention, law enforcement agencies solved approximately three to five percent of the reported residential burglaries in 2012. Internationally, studies suggest that a large proportion of crimes are committed by a minority of offenders. Law enforcement agencies, consequently, are required to detect series of crimes, or linked crimes. Comparison of crime reports today is difficult, as no systematic or structured way of reporting crimes exists, and no ability to search multiple crime reports exists. This study presents a systematic data collection method for residential burglaries. A decision support system for comparing and analysing residential burglaries is also presented. The decision support system consists of an advanced search tool and a plugin-based analytical framework. In order to find similar crimes, law enforcement officers have to review a large number of crimes. The potential of the cut-clustering algorithm to group crimes by their characteristics, and thereby reduce the number of crimes to review in residential burglary analysis, is investigated. The characteristics used are modus operandi, residential characteristics, stolen goods, spatial similarity, or temporal similarity. Clustering quality is measured using the modularity index and accuracy is measured using the Rand index. The clustering solution with the best quality score was based on residential characteristics, spatial proximity, and modus operandi, suggesting that the choice of characteristics used when grouping crimes can positively affect the end result. The results suggest that a high-quality clustering solution performs significantly better than a random guesser. In terms of practical significance, the presented clustering approach is capable of reducing the number of cases to review while keeping most connected cases. While the approach might miss some connections, it is also capable of suggesting new ones. The results also suggest that while crime series clustering is feasible, further investigation is needed.
16.
  • Borg, Anton, et al. (authors)
  • Using VADER sentiment and SVM for predicting customer response sentiment
  • 2020
  • In: Expert Systems with Applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 162
  • Journal article (peer-reviewed), abstract:
    • Customer support is important to corporate operations, and involves dealing with both disgruntled and content customers, who can have different requirements. As such, it is important to quickly extract the sentiment of support errands. In this study we investigate sentiment analysis in customer support for a large Swedish Telecom corporation. The data set consists of 168,010 e-mails divided into 69,900 conversation threads, without any sentiment information available. Therefore, VADER sentiment is used together with a Swedish sentiment lexicon to provide initial labeling of the e-mails. The e-mail content and sentiment labels are then used to train two Support Vector Machine models to extract/classify the sentiment of e-mails. Further, the ability to predict the sentiment of not-yet-seen e-mail responses is investigated. Experimental results show that the LinearSVM model was able to extract sentiment with a mean F1-score of 0.834 and a mean AUC of 0.896. Moreover, the LinearSVM algorithm was also able to predict the sentiment of an e-mail one step ahead in the thread (based on the text of an already sent e-mail) with a mean F1-score of 0.688 and a mean AUC of 0.805. The results indicate a predictable pattern in e-mail conversations that enables predicting the sentiment of a not-yet-seen e-mail. This can be used, e.g., to prepare particular actions for customers that are likely to have a negative response. It can also provide feedback on possible sentiment reactions to customer support e-mails.
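A VADER-style lexicon scorer of the kind used for the initial labeling can be sketched as follows. A toy sketch: the lexicon, negation and booster lists are tiny illustrative stand-ins for the real VADER and Swedish lexicons; the negation damping (-0.74) and normalization constant (15) follow VADER's published values, while the booster increments are approximate.

```python
import math

# Tiny illustrative lexicon; the real VADER word lists are far larger.
LEXICON = {"good": 1.9, "great": 3.1, "bad": -2.5, "terrible": -3.4,
           "thanks": 1.9, "problem": -1.7}
NEGATIONS = {"not", "no", "never"}
BOOSTERS = {"very": 0.3, "extremely": 0.4}

def sentiment(text):
    """VADER-style compound score in [-1, 1]: sum word valences with
    negation and booster handling, then normalize."""
    words = text.lower().split()
    total = 0.0
    for i, w in enumerate(words):
        if w not in LEXICON:
            continue
        v = LEXICON[w]
        if i > 0 and words[i - 1] in BOOSTERS:          # intensify: "very good"
            v += math.copysign(BOOSTERS[words[i - 1]], v)
        if any(words[j] in NEGATIONS for j in range(max(0, i - 3), i)):
            v *= -0.74                                  # damped flip: "not good"
        total += v
    return total / math.sqrt(total * total + 15)        # VADER normalization
```

These lexicon scores would then serve as weak labels for training a supervised classifier, as the study does with its two SVM models.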
17.
  • Calikus, Ece, 1990-, et al. (authors)
  • No free lunch but a cheaper supper : A general framework for streaming anomaly detection
  • 2020
  • In: Expert Systems with Applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 155
  • Journal article (peer-reviewed), abstract:
    • In recent years, there has been increased research interest in detecting anomalies in temporal streaming data. A variety of algorithms have been developed in the data mining community, which can be divided into two categories (i.e., general and ad hoc). In most cases, general approaches assume the one-size-fits-all solution model where a single anomaly detector can detect all anomalies in any domain.  To date, there exists no single general method that has been shown to outperform the others across different anomaly types, use cases and datasets. On the other hand, ad hoc approaches that are designed for a specific application lack flexibility. Adapting an existing algorithm is not straightforward if the specific constraints or requirements for the existing task change. In this paper, we propose SAFARI, a general framework formulated by abstracting and unifying the fundamental tasks in streaming anomaly detection, which provides a flexible and extensible anomaly detection procedure. SAFARI helps to facilitate more elaborate algorithm comparisons by allowing us to isolate the effects of shared and unique characteristics of different algorithms on detection performance. Using SAFARI, we have implemented various anomaly detectors and identified a research gap that motivates us to propose a novel learning strategy in this work. We conducted an extensive evaluation study of 20 detectors that are composed using SAFARI and compared their performances using real-world benchmark datasets with different properties. The results indicate that there is no single superior detector that works well for every case, proving our hypothesis that "there is no free lunch" in the streaming anomaly detection world. Finally, we discuss the benefits and drawbacks of each method in-depth and draw a set of conclusions to guide future users of SAFARI.
  •  
18.
  • De Masellis, Riccardo, et al. (författare)
  • Solving reachability problems on data-aware workflows
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 189
  • Tidskriftsartikel (refereegranskat)abstract
    • Recent advances in the field of Business Process Management (BPM) have brought about several suites able to model data objects along with the traditional control-flow perspective. Nonetheless, when it comes to formal verification there is still a lack of effective verification tools for imperative data-aware process models and executions: the data perspective is often abstracted away and verification tools are often missing. Automated Planning is one of the core areas of Artificial Intelligence, where theoretical investigations and concrete, robust tools have made it possible to reason about dynamic systems and domains. Moreover, planning techniques are gaining popularity in the context of BPM. Starting from these observations, we provide here a concrete framework for formal verification of reachability properties on an expressive, yet empirically tractable, class of data-aware process models, an extension of Workflow Nets. Then we provide a rigorous mapping between the semantics of such models and that of three important Automated Planning paradigms: Action Languages, Classical Planning, and Model-Checking. Finally, we perform a comprehensive assessment of the performance of three popular tools supporting the above paradigms in solving reachability problems for imperative data-aware business processes, which paves the way for a theoretically well-founded and practically viable exploitation of planning-based techniques on data-aware business processes.
  •  
19.
  • de Morais, Gustavo A. Prudencio, et al. (författare)
  • Robust path-following control design of heavy vehicles based on multiobjective evolutionary optimization
  • 2022
  • Ingår i: Expert systems with applications. - : PERGAMON-ELSEVIER SCIENCE LTD. - 0957-4174 .- 1873-6793. ; 192
  • Tidskriftsartikel (refereegranskat)abstract
    • The ability to deal with system parametric uncertainties is an essential issue for heavy self-driving vehicles in unconfined environments. In this sense, robust controllers prove to be efficient for autonomous navigation. However, uncertainty matrices for this class of systems are usually defined by algebraic methods that demand prior knowledge of the system dynamics. In this case, the control system designer depends on the quality of the uncertainty model to obtain optimal control performance. This work proposes a robust recursive controller designed via multiobjective optimization to overcome these shortcomings. Furthermore, a local search approach for multiobjective optimization problems is presented. The proposed method applies to any multiobjective evolutionary algorithm already established in the literature. The results presented show that this combination of model-based control and machine learning improves the effectiveness of the system in terms of robustness, stability and smoothness.
  •  
20.
  • Deegalla, Sampath, et al. (författare)
  • Random subspace and random projection nearest neighbor ensembles for high dimensional data
  • 2022
  • Ingår i: Expert systems with applications. - 0957-4174 .- 1873-6793. ; 191
  • Tidskriftsartikel (refereegranskat)abstract
    • The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher’s discriminant ratio, provide some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.
  •  
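The two ensemble constructions compared in the Deegalla et al. abstract above can be sketched in a few lines with scikit-learn (a hypothetical minimal setup on a stock dataset, not the authors' code or data): a random-subspace ensemble of 1-NN members built with `BaggingClassifier` and `max_features`, and a random-projection ensemble where each member works in its own Gaussian projection of the feature space.

```python
# Sketch of random-subspace vs. random-projection 1-NN ensembles,
# assuming scikit-learn is available; dataset and sizes are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.random_projection import GaussianRandomProjection

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random subspace: each 1-NN member sees a random 50% of the features.
subspace = BaggingClassifier(
    KNeighborsClassifier(n_neighbors=1),
    n_estimators=25, max_features=0.5, bootstrap=False, random_state=0)
subspace.fit(X_tr, y_tr)

# Random projection: each member gets its own random low-dimensional map.
members = []
for seed in range(25):
    rp = GaussianRandomProjection(n_components=16, random_state=seed)
    knn = KNeighborsClassifier(n_neighbors=1).fit(rp.fit_transform(X_tr), y_tr)
    members.append((rp, knn))

def rp_predict(X):
    votes = np.stack([knn.predict(rp.transform(X)) for rp, knn in members])
    # Majority vote across the 25 ensemble members, one column per sample.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

print(subspace.score(X_te, y_te), (rp_predict(X_te) == y_te).mean())
```

Which variant wins depends on the data, consistent with the abstract's finding that no single method dominates across dataset types.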
22.
  • Demirbay, Baris, et al. (författare)
  • Multivariate regression (MVR) and different artificial neural network (ANN) models developed for optical transparency of conductive polymer nanocomposite films
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 207
  • Tidskriftsartikel (refereegranskat)abstract
    • The present study addresses a comparative performance assessment of multivariate regression (MVR) and well-optimized feed-forward, generalized regression and radial basis function neural network models which aimed to predict transmitted light intensity (I-tr) of carbon nanotube (CNT)-loaded polymer nanocomposite films by employing a large set of spectroscopic data collected from photon transmission measurements. To assess prediction performance of each developed model, universally accepted statistical error indices, regression, residual and Taylor diagram analyses were performed. As a novel performance evaluation criterion, 2D kernel density mapping was applied to predicted and experimental I-tr data to visually map out where the correlations are stronger and which data points can be more precisely estimated using the studied models. Employing MVR analysis, empirical equation of I-tr was acquired as a function of only four input elements due to sparseness and repetitive nature of the remaining input variables. Relative importance of each input variable was calculated separately through implementing Garson's algorithm for the best ANN model and mass fraction of CNT nanofillers was found as the most significant input variable. Using interconnection weights and bias values obtained for feed-forward neural network (FFNN) model, a neural predictive formula was derived to model I-tr. in terms of all input variables. 2D kernel density maps computed for each ANN model have shown that correlations between measured data and ANN predicted values are stronger for a specific I-tr range between 0% and 18%. To measure the stability of the ANN models, as a final analysis, 5-fold cross-validation method was applied to whole measurement data and 5 different iterations were additionally performed on each ANN model for 5 different training and test data splits. 
Statistical results from the 5-fold cross-validation analysis reaffirmed that the FFNN model exhibited superior prediction ability over all other ANN models, and all FFNN-predicted I-tr values agreed well with the experimental I-tr data. Taking all computational results together, one can adapt our proposed FFNN model and neural predictive formula to predict I-tr of polymer nanocomposite films, which can be made from different polymers and nanofillers, by considering the specific data range presented in this study together with the statistical details.
  •  
23.
  • Djordjevic, Boban, et al. (författare)
  • An optimisation-based digital twin for automated operation of rail level crossings
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 239
  • Tidskriftsartikel (refereegranskat)abstract
    • Railway level crossings (LCs), as the intersection of road and rail transport, are the weak points in terms of safety, as they are used by different modes of transport. The safety level at LCs can therefore be affected by the behaviour of the users. However, the level of safety can also be affected by failures and errors in the operation of LC equipment. Apart from safety, errors and failures of the LC devices can lead to longer waiting times for road users. As the volume of traffic on rail and road increases, so does the risk that the level of safety will decrease. The increase in traffic volume via LC leads to higher traffic volume on the road and more frequent trains on the rail, which leads to longer waiting times for road users on the LCs. The longer waiting times can disrupt the traffic flow, especially during peak hours when the growing volume of traffic on road and rail increases road user dissatisfaction. Moreover, in the era of Industry 4.0 and Digital Rail, new digital and automated technologies are being introduced to improve rail performance and competitiveness. These technologies are aligned with the LCs and are intended to ensure the efficient operation of LC and the efficient use of LCs by conventional trains as well. To achieve this, a concept is needed that simultaneously monitors and visualises the operation of LC in real time, identifies potential faults and failures of the LC equipment, and updates and monitors the proper operation of LC based on the historical data and information of the operation of LC according to the road traffic volume and the characteristics of the rail traffic and trains. Therefore, in this study, a digital twin system (DT) for rail LC was initiated and built as a concept that can meet the above requirements for proper LC operation in real time. DT of LC includes all components of LC and communication between them to synchronise the operation of LC according to the real-time requirements. 
The DT system is able to optimise the operation time of LC by monitoring the operation of LC and collecting data to ensure efficient use of LC and reduce unnecessary waiting time for road users. In this paper, the operation time of LCs on Swedish and Taiwanese railways was compared using the developed level crossing optimisation model (OLC). Since the introduction of new signalling concepts requires an improvement of LC operating characteristics and their design, the operating strategies were modelled using the OLC model. The results of the work show that the optimal values of LC operation time are different for the case studies investigated. The replacement of track circuits as detection devices and the introduction of balises can also positively influence the operation time, as well as increasing the speed of trains via LCs. However, due to the formulation of the OLC model, the impact of a longer train length on the operation of LC is not recognised. The OLC model can be used to estimate the real-time operation time of LC under different traffic conditions as well as the impact of different changes and extensions of LC.
  •  
24.
  • Dong, Chenchen, et al. (författare)
  • A complex network-based response method for changes in customer requirements for design processes of complex mechanical products
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 199, s. 117124-117124
  • Tidskriftsartikel (refereegranskat)abstract
    • The soaring demand, inevitable changes, and substantial change costs associated with complex mechanical products (CMPs) have accelerated the need to reasonably and accurately respond to changes in customer requirements during the product design process. However, current related studies cannot provide a simple and intuitive decision reference for decision-makers (DMs) to respond to these changes. In this work, a complex network theory-based methodology is proposed. First, a complex network model of CMPs is constructed; this model is processed unidirectionally through analysis of the constraint relation and affiliation among parts. Second, all nodes in this network are divided into levels, and all feasible change propagation paths are selected by breadth-first search. Furthermore, to quantify the change losses of paths, a novel “change workload” is proposed, which is a comprehensive indicator, and a distinct decision reference. The “change workload” is composed of “network change rate,” “change magnification node rate,” and “change magnification rate,” whose weights are evaluated by The Entropy Method and Technique for Order Preference by Similarity to an Ideal Solution. Due to the independence of the “change workload” from expert experience, the proposed methodology is reasonable and capable of outputting a list of affected parts and a preferred ordering of propagation paths, which could provide clearer and more direct guidance for DMs. This presented method is fully proven through a real-world case study of a wind turbine. 
  •  
25.
  • Ebrahimi, Zahra, et al. (författare)
  • A Review on Deep Learning Methods for ECG Arrhythmia Classification
  • 2020
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793 .- 2590-1885. ; 7
  • Tidskriftsartikel (refereegranskat)abstract
    • Deep Learning (DL) has recently become a topic of study in different applications including healthcare, in which timely detection of anomalies on the Electrocardiogram (ECG) can play a vital role in patient monitoring. This paper presents a comprehensive review of recent DL methods applied to ECG signals for classification purposes. The study considers various types of DL methods such as the Convolutional Neural Network (CNN), Deep Belief Network (DBN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). From the 75 studies reported between 2017 and 2018, CNN is dominantly observed as the suitable technique for feature extraction, seen in 52% of the studies. DL methods showed high accuracy in the correct classification of Atrial Fibrillation (AF) (100%), Supraventricular Ectopic Beats (SVEB) (99.8%), and Ventricular Ectopic Beats (VEB) (99.7%) using the GRU/LSTM, CNN, and LSTM, respectively.
  •  
26.
  • Eivazi, Hamidreza, et al. (författare)
  • Towards extraction of orthogonal and parsimonious non-linear modes from turbulent flows
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 202, s. 117038-
  • Tidskriftsartikel (refereegranskat)abstract
    • Modal-decomposition techniques are computational frameworks based on data aimed at identifying a low-dimensional space for capturing dominant flow features: the so-called modes. We propose a deep probabilistic-neural-network architecture for learning a minimal and near-orthogonal set of non-linear modes from high-fidelity turbulent-flow data useful for flow analysis, reduced-order modeling and flow control. Our approach is based on beta-variational autoencoders (beta-VAEs) and convolutional neural networks (CNNs), which enable extracting non-linear modes from multi-scale turbulent flows while encouraging the learning of independent latent variables and penalizing the size of the latent vector. Moreover, we introduce an algorithm for ordering VAE-based modes with respect to their contribution to the reconstruction. We apply this method for non-linear mode decomposition of the turbulent flow through a simplified urban environment, where the flow-field data is obtained based on well-resolved large-eddy simulations (LESs). We demonstrate that by constraining the shape of the latent space, it is possible to motivate the orthogonality and extract a set of parsimonious modes sufficient for high-quality reconstruction. Our results show the excellent performance of the method in the reconstruction against linear-theory-based decompositions, where the energy percentage captured by the proposed method from five modes is 87.36%, against 32.41% for POD. Moreover, we compare our method with available AE-based models. We show the ability of our approach to extract near-orthogonal modes with the determinant of the correlation matrix equal to 0.99, which may aid interpretability.
  •  
27.
  • Ejnarsson, Marcus, et al. (författare)
  • Multi-resolution screening of paper formation variations on production line
  • 2009
  • Ingår i: Expert systems with applications. - Amsterdam : Elsevier. - 0957-4174 .- 1873-6793. ; 36:2, part 2, s. 3144-3152
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper is concerned with a technique for detecting and monitoring abnormal paper formation variations in machine direction online in various frequency regions. A paper web is illuminated by two red diode lasers and the reflected light recorded as two time series of high resolution measurements constitute the input signal to the papermaking process monitoring system. The time series are divided into blocks and each block is analyzed separately. The task is treated as kernel based novelty detection applied to a multi-resolution time series representation obtained from the band-pass filtering of the Fourier power spectrum of the time series block. The frequency content of each frequency region is characterized by a feature vector, which is transformed using the kernel canonical correlation analysis and then categorized into the inlier or outlier class by the novelty detector. The ratio of outlying data points, significantly exceeding the predetermined value, indicates abnormalities in the paper formation. The experimental investigations performed have shown good repetitiveness and stability of the paper formation abnormalities detection results. The tools developed are used for online paper formation monitoring in a paper mill.
  •  
28.
  • Englund, Cristofer, et al. (författare)
  • A novel approach to estimate proximity in a random forest : An exploratory study
  • 2012
  • Ingår i: Expert systems with applications. - Amsterdam : Elsevier BV. - 0957-4174 .- 1873-6793. ; 39:17, s. 13046-13050
  • Tidskriftsartikel (refereegranskat)abstract
    • A data proximity matrix is an important information source in random forests (RF) based data mining, including data clustering, visualization, outlier detection, substitution of missing values, and finding mislabeled data samples. A novel approach to estimate proximity is proposed in this work. The approach is based on measuring distance between two terminal nodes in a decision tree. To assess the consistency (quality) of data proximity estimate, we suggest using the proximity matrix as a kernel matrix in a support vector machine (SVM), under the assumption that a matrix of higher quality leads to higher classification accuracy. It is experimentally shown that the proposed approach improves the proximity estimate, especially when RF is made of a small number of trees. It is also demonstrated that, for some tasks, an SVM exploiting the suggested proximity matrix based kernel, outperforms an SVM based on a standard radial basis function kernel and the standard proximity matrix based kernel.
  •  
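As a rough illustration of the baseline the Englund et al. paper builds on, the standard Breiman-style proximity (the fraction of trees in which two samples land in the same terminal node) can be computed from a forest's leaf assignments and used as a precomputed SVM kernel. The refinement proposed in the paper, measuring the distance between terminal nodes within each tree, is not reproduced here, and the dataset is an arbitrary stand-in.

```python
# Sketch: standard random-forest proximity matrix as a precomputed SVM kernel,
# assuming scikit-learn; this is the baseline, not the paper's refined measure.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def proximity(A, B, forest):
    # forest.apply gives the leaf index of every sample in every tree,
    # shape (n_samples, n_trees).
    la, lb = forest.apply(A), forest.apply(B)
    # prox[i, j] = fraction of trees where i and j share a terminal node.
    return (la[:, None, :] == lb[None, :, :]).mean(axis=2)

svm = SVC(kernel="precomputed").fit(proximity(X_tr, X_tr, rf), y_tr)
print(svm.score(proximity(X_te, X_tr, rf), y_te))
```

The test-set kernel must be the proximities between test and training samples, in that row/column order, which is why `proximity(X_te, X_tr, rf)` is passed at prediction time.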
29.
  • Englund, Cristofer, et al. (författare)
  • The application of data mining techniques to model visual distraction of bicyclists
  • 2016
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 52, s. 99-107
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper presents a novel approach to modelling visual distraction of bicyclists. A unique bicycle simulator equipped with sensors capable of capturing the behaviour of the bicyclist is presented. While cycling two similar scenario routes, once while simultaneously interacting with an electronic device and once without any electronic device, statistics of the measured speed, head movements, steering angle and bicycle road position along with questionnaire data are captured. These variables are used to model the self-assessed distraction level of the bicyclist. Data mining techniques based on random forests, support vector machines and neural networks are evaluated for the modelling task. Out of the total 71 measured variables a variable selection procedure based on random forests is able to select a fraction of those and consequently improving the modelling performance. By combining the random forest-based variable selection and support vector machine-based modelling technique the best overall performance is achieved. The method shows that with a few observable variables it is possible to use machine learning to model, and thus predict, the distraction level of a bicyclist.
  •  
30.
  • Esmaeili, Leila, et al. (författare)
  • An efficient method to minimize cross-entropy for selecting multi-level threshold values using an improved human mental search algorithm
  • 2021
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 182
  • Tidskriftsartikel (refereegranskat)abstract
    • Minimum cross-entropy thresholding (MCIT) is a multi-level image thresholding approach, but it suffers from high time complexity, in particular when the number of thresholds is high. To address this issue, this paper proposes a novel MCIT-based image thresholding method built on an improved human mental search (HMS) algorithm, a recently proposed population-based metaheuristic for tackling complex optimisation problems. To further enhance efficacy, we improve the HMS algorithm into IHMSMLIT through four changes: adaptive (rather than random) selection of the number of mental searches, a one-step k-means clustering for region clustering, updating based on global and personal experiences, and a random clustering strategy. To assess our proposed algorithm, we conduct an extensive set of experiments against several state-of-the-art and recent approaches on a benchmark set of images, in terms of several criteria including the objective function, peak signal-to-noise ratio (PSNR), feature similarity index (FSIM), structural similarity index (SSIM), and stability analysis. The results obtained clearly demonstrate the competitive performance of our proposed algorithm.
  •  
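For a single threshold, the minimum cross-entropy criterion of Li and Lee that this line of work optimises can still be evaluated exhaustively. The sketch below (toy histogram and hypothetical variable names, not the paper's code) shows the objective that metaheuristics such as HMS take over once several thresholds make exhaustive search infeasible: minimising the cross entropy is equivalent to minimising the negative sum of each class's intensity mass times the log of its mean.

```python
# Sketch: brute-force single-threshold minimum cross-entropy (Li & Lee style)
# on a grey-level histogram; metaheuristics replace this search for many
# thresholds. Variable names and the toy data are illustrative.
import numpy as np

def mcet_threshold(hist):
    """Return the grey level minimising the cross-entropy criterion."""
    levels = np.arange(1, len(hist) + 1)   # 1-based levels avoid log(0)
    w = hist * levels                      # intensity mass per level
    best_t, best_cost = None, np.inf
    for t in range(1, len(hist)):
        lo, hi = w[:t].sum(), w[t:].sum()
        n_lo, n_hi = hist[:t].sum(), hist[t:].sum()
        if n_lo == 0 or n_hi == 0:
            continue
        mu_lo, mu_hi = lo / n_lo, hi / n_hi
        # Minimising cross entropy == minimising -(m1*log(mu1) + m2*log(mu2)),
        # since the term sum_i i*h(i)*log(i) does not depend on t.
        cost = -(lo * np.log(mu_lo) + hi * np.log(mu_hi))
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# Bimodal toy histogram: two pixel populations centred near levels 50 and 200.
rng = np.random.default_rng(0)
pix = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
hist, _ = np.histogram(np.clip(pix, 0, 255), bins=256, range=(0, 256))
print(mcet_threshold(hist.astype(float)))  # a threshold between the two modes
```

With k thresholds the search space grows combinatorially, which is exactly where population-based optimisers such as the improved HMS are applied.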
31.
  • Farouq, Shiraz, 1980-, et al. (författare)
  • A conformal anomaly detection based industrial fleet monitoring framework : A case study in district heating
  • 2022
  • Ingår i: Expert systems with applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 201
  • Tidskriftsartikel (refereegranskat)abstract
    • The monitoring infrastructure of an industrial fleet can rely on the so-called unit-level and subfleet-level models to observe the behavior of a target unit. However, such infrastructure has to confront several challenges. First, from an anomaly detection perspective of monitoring a target unit, unit-level and subfleet-level models can give different information about the nature of an anomaly, and which approach or level model is appropriate is not always clear. Second, in the absence of well-understood prior models of unit and subfleet behavior, the choice of a base model at their respective levels, especially in an online/streaming setting, may not be clear. Third, managing false alarms is a major problem. To deal with these challenges, we proposed to rely on the conformal anomaly detection framework. In addition, an ensemble approach was deployed to mitigate the knowledge gap in understanding the underlying data-generating process at the unit and subfleet levels. Therefore, to monitor the behavior of a target unit, a unit-level ensemble model (ULEM) and a subfleet-level ensemble model (SLEM) were constructed, where each member of the respective ensemble is based on a conformal anomaly detector (CAD). However, since the information obtained by these two ensemble models through their p-values may not always agree, a combined ensemble model (CEM) was proposed. The results are based on real-world operational data obtained from district heating (DH) substations. Here, it was observed that CEM reduces the overall false alarms compared to ULEM or SLEM, albeit at the cost of some detection delay. The analysis demonstrated the advantages and limitations of ULEM, SLEM, and CEM. Furthermore, discords obtained from the state-of-the-art matrix-profile (MP) method and the combined calibration scores obtained from ULEM and SLEM were compared in an offline setting. Here, it was observed that SLEM achieved a better overall precision and detection delay. 
Finally, the different components related to ULEM, SLEM, and CEM were put together into what we refer to as TRANTOR: a conformal anomaly detection based industrial fleet monitoring framework. The proposed framework is expected to enable fleet operators in various domains to improve their monitoring infrastructure by efficiently detecting anomalous behavior and controlling false alarms at the target units.
  •  
32.
  • Flyckt, Jonatan, et al. (författare)
  • Detecting ditches using supervised learning on high-resolution digital elevation models
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 201
  • Tidskriftsartikel (refereegranskat)abstract
    • Drained wetlands can constitute a large source of greenhouse gas emissions, but the drainage networks in these wetlands are largely unmapped, and better maps are needed to aid in forest production and to better understand the climate consequences. We develop a method for detecting ditches in high-resolution digital elevation models derived from LiDAR scans. Thresholding methods using digital terrain indices can be used to detect ditches. However, a single threshold generally does not capture the variability in the landscape, and generates many false positives and negatives. We hypothesise that, by combining the digital terrain indices using supervised learning, we can improve ditch detection at a landscape scale. In addition to digital terrain indices, additional features are generated by transforming the data to include neighbouring cells for better ditch predictions. A Random Forests classifier is used to locate the ditches, and its probability output is processed to remove noise and binarised to produce the final ditch prediction. The confidence interval for Cohen's Kappa index ranges from 0.655 to 0.781 between the evaluation plots, at a confidence level of 95%. The study demonstrates that combining information from a suite of digital terrain indices using machine learning provides an effective technique for automatic ditch detection at a landscape scale, aiding both practical forest management and the fight against climate change.
  •  
33.
  • Fries, Niklas, et al. (författare)
  • A comparison of local explanation methods for high-dimensional industrial data : a simulation study
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 207
  • Tidskriftsartikel (refereegranskat)abstract
    • Prediction methods can be augmented by local explanation methods (LEMs) to perform root cause analysis for individual observations. But while most recent research on LEMs focus on low-dimensional problems, real-world datasets commonly have hundreds or thousands of variables. Here, we investigate how LEMs perform for high-dimensional industrial applications. Seven prediction methods (penalized logistic regression, LASSO, gradient boosting, random forest and support vector machines) and three LEMs (TreeExplainer, Kernel SHAP, and the conditional normal sampling importance (CNSI)) were combined into twelve explanation approaches. These approaches were used to compute explanations for simulated data, and real-world industrial data with simulated responses. The approaches were ranked by how well they predicted the contributions according to the true models. For the simulation experiment, the generalized linear methods provided best explanations, while gradient boosting with either TreeExplainer or CNSI, or random forest with CNSI were robust for all relationships. For the real-world experiment, TreeExplainer performed similarly, while the explanations from CNSI were significantly worse. The generalized linear models were fastest, followed by TreeExplainer, while CNSI and Kernel SHAP required several orders of magnitude more computation time. In conclusion, local explanations can be computed for high-dimensional data, but the choice of statistical tools is crucial.
  •  
34.
  • Fries, Niklas, et al. (författare)
  • Data-driven process adjustment policies for quality improvement
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 237
  • Tidskriftsartikel (refereegranskat)abstract
    • Common objectives in machine learning research are to predict the output quality of manufacturing processes, to perform root cause analysis in case of reduced quality, and to propose intervention strategies. The cost of reduced quality must be weighed against the cost of the interventions, which depend on required downtime, personnel costs, and material costs. Furthermore, there is a risk of false negatives, i.e., failure to identify the true root causes, or false positives, i.e., adjustments that further reduce the quality. A policy for process adjustments describes when and where to perform interventions, and we say that a policy is worthwhile if it reduces the expected operational cost. In this paper, we describe a data-driven alarm and root cause analysis framework, that given a predictive and explanatory model trained on high-dimensional process and quality data, can be used to search for a worthwhile adjustment policy. The framework was evaluated on large-scale simulated process and quality data. We find that worthwhile adjustment policies can be derived also for problems with a large number of explanatory variables. Interestingly, the performance of the adjustment policies is almost exclusively driven by the quality of the model fits. Based on these results, we discuss key areas of future research, and how worthwhile adjustment policies can be implemented in real world applications.
  •  
35.
  • Frumosu, Flavia Dalia, et al. (författare)
  • Cost-sensitive learning classification strategy for predicting product failures
  • 2020
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 161:15
  • Tidskriftsartikel (refereegranskat)abstract
    • In the current era of Industry 4.0, sensor data used in connection with machine learning algorithms can help manufacturing industries to reduce costs and to predict failures in advance. This paper addresses a binary classification problem found in manufacturing engineering, which focuses on how to ensure product quality delivery and at the same time to reduce production costs. The aim behind this problem is to predict the number of faulty products, which in this case is extremely low. As a result of this characteristic, the problem is reduced to an imbalanced binary classification problem. The authors contribute to imbalanced classification research in three important ways. First, the industrial application coming from the electronic manufacturing industry is presented in detail, along with its data and modelling challenges. Second, a modified cost-sensitive classification strategy based on a combination of Voronoi diagrams and genetic algorithm is applied to tackle this problem and is compared to several base classifiers. The results obtained are promising for this specific application. Third, in order to evaluate the flexibility of the strategy, and to demonstrate its wide range of applicability, 25 real-world data sets are selected from the KEEL repository with different imbalance ratios and number of features. The strategy, in this case implemented without a predefined cost, is compared with the same base classifiers as those used for the industrial problem.
  •  
36.
  • Georgoulas, George, et al. (författare)
  • Principal component analysis of the start-up transient and hidden Markov modeling for broken rotor bar fault diagnosis in asynchronous machines
  • 2013
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 40:17, s. 7024-7033
  • Tidskriftsartikel (refereegranskat) abstract
    • This article presents a novel computational method for the diagnosis of broken rotor bars in three-phase asynchronous machines. The proposed method is based on Principal Component Analysis (PCA) and is applied to the stator’s three-phase start-up current. Fault detection is easier in the start-up transient because of the increased current in the rotor circuit, which amplifies the effects of the fault in the stator’s current independently of the motor’s load. In the proposed fault detection methodology, PCA is initially utilized to extract a characteristic component, which reflects the rotor asymmetry caused by the broken bars. This component can be subsequently processed using Hidden Markov Models (HMMs). Two schemes, a multiclass and a one-class approach, are proposed. The efficiency of the proposed schemes is evaluated in multiple experimental test cases. The results obtained indicate that the suggested approaches, based on the combination of PCA and HMM, can be successfully utilized not only for identifying the presence of a broken bar but also for estimating the severity (number of broken bars) of the fault.
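The PCA step described above can be sketched in a few lines of numpy. The three-phase signal below is synthetic and the HMM classification stage is omitted; only the extraction of a characteristic component from multichannel current samples is shown.

```python
# Minimal PCA sketch: project a three-phase signal (rows = samples,
# columns = phases) onto its leading principal direction via SVD.
# The signal is synthetic; the paper applies this to measured stator currents.
import numpy as np

def principal_component(X):
    """Return the first principal component scores of X (n_samples, n_channels)."""
    Xc = X - X.mean(axis=0)                       # center each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[0]                             # project onto leading direction

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
# Three phases of a 50 Hz component, 120 degrees apart, plus noise.
phases = np.stack(
    [np.sin(2 * np.pi * 50 * t + k * 2 * np.pi / 3) for k in range(3)], axis=1
)
X = phases + 0.05 * rng.standard_normal((500, 3))
scores = principal_component(X)
print(scores.shape)  # (500,)
```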
  •  
37.
  • Gerdes, Mike (författare)
  • Decision trees and genetic algorithms for condition monitoring forecasting of aircraft air conditioning
  • 2013
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 40:12, s. 5021-5026
  • Tidskriftsartikel (refereegranskat) abstract
    • Unscheduled maintenance of aircraft can cause significant costs: the machine needs to be repaired before it can operate again. Thus it is desirable to have concepts and methods to prevent unscheduled maintenance. This paper proposes a method for forecasting the condition of an aircraft air conditioning system based on observed past data. Forecasting is done point by point by iterating the algorithm. The proposed method uses decision trees to find and learn patterns in past data and uses these patterns to select the best forecasting method for future data points. Forecasting a data point is based on selecting the best applicable approximation method. The selection is done by calculating different features/attributes of the time series and then evaluating the decision tree. A genetic algorithm is used to find the best feature set for the given problem to increase the forecasting performance. The experiments show a good forecasting ability even when the function is disturbed by noise.
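The per-point method-selection idea above can be caricatured in plain Python. The paper learns the choice with a decision tree over time-series features (tuned by a genetic algorithm); the direct error comparison and the three candidate forecasters below are invented simplifications.

```python
# Toy per-point forecaster selection: for each new point, pick whichever
# candidate method had the smallest one-step-ahead error on the recent window.
# The candidate methods and the selection rule are invented for illustration.

def last_value(h): return h[-1]                   # naive forecast
def mean3(h):      return sum(h[-3:]) / len(h[-3:])
def drift(h):      return h[-1] + (h[-1] - h[-2])

METHODS = [last_value, mean3, drift]

def forecast_next(history, lookback=5):
    # Score each method by its one-step-ahead error over the recent window.
    best, best_err = None, float("inf")
    for m in METHODS:
        err = sum(abs(m(history[:i]) - history[i])
                  for i in range(len(history) - lookback, len(history)))
        if err < best_err:
            best, best_err = m, err
    return best(history)

trend = [float(i) for i in range(1, 21)]          # 1, 2, ..., 20
print(forecast_next(trend))  # drift wins on a linear trend -> 21.0
```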
  •  
38.
  • Hilletofth, Per, et al. (författare)
  • Three novel fuzzy logic concepts applied to reshoring decision-making
  • 2019
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 126, s. 133-143
  • Tidskriftsartikel (refereegranskat) abstract
    • This paper investigates the possibility of increasing the interpretability of fuzzy rules and reducing the complexity when designing fuzzy rules. To achieve this, three novel fuzzy logic concepts (i.e., relative linguistic labels, high-level rules and linguistic variable weights) were conceived and implemented in a fuzzy logic system for reshoring decision-making. The introduced concepts increase the interpretability of fuzzy rules and reduce the complexity when designing fuzzy rules while still providing accurate results.
  •  
39.
  • Hossain, Emam, et al. (författare)
  • Machine learning with Belief Rule-Based Expert Systems to predict stock price movements
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 206
  • Tidskriftsartikel (refereegranskat) abstract
    • Price prediction of financial assets has been a key interest for researchers over the decades. Numerous techniques to predict price movements have been developed by researchers over the years. But a model loses its credibility once a large number of traders start using the same technique. Therefore, traders are in continuous search of new and efficient prediction techniques. In this research, we propose a novel machine learning technique using technical analysis with a Belief Rule-Based Expert System (BRBES), incorporating the concept of the Bollinger Band to forecast the stock price in the next five days. A Bollinger Event is triggered when the closing price of the stock goes below the Lower Bollinger Band. The BRBES approach has never been applied to stock markets, despite its potential and the appetite of the financial markets for expert systems. We predict the price movement of the Swedish company TELIA as a proof of concept. The knowledge base of the initial BRBES is constructed by simulating the historical data, and then the learning parameters are optimized using MATLAB’s fmincon function. We evaluate the performance of the trained BRBES in terms of Accuracy, Area Under ROC Curve, Root Mean Squared Error, type I error, type II error,  value, and profit/loss ratio. We compare our proposed model against a similar rule-based technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to understand the significance of the improved rule base of BRBES. We also compare the performance against a Support Vector Machine (SVM), one of the most popular machine learning techniques, and a simple heuristic model. Finally, the trained BRBES is compared against recent state-of-the-art deep learning approaches to show how competitive the performance of our proposed model is. The results show that the trained BRBES produces better performance than the non-trained BRBES, ANFIS, SVM, and the heuristic approaches. Also, it indicates better or competitive performance against the deep learning approaches. Thus BRBES exhibits its potential in predicting financial asset price movement.
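The Bollinger Event described above can be sketched with plain numpy: an event fires when the closing price crosses below the lower band (moving average minus k standard deviations). The prices are synthetic, and the window and multiplier are the conventional defaults rather than values taken from the paper.

```python
# Bollinger-Band event detector: flag days where the close falls below
# the lower band computed over the preceding `window` days.
import numpy as np

def bollinger_events(close, window=20, k=2.0):
    close = np.asarray(close, dtype=float)
    events = np.zeros(len(close), dtype=bool)
    for i in range(window, len(close)):
        w = close[i - window:i]                 # trailing window, excl. today
        lower = w.mean() - k * w.std()          # lower Bollinger band
        events[i] = close[i] < lower
    return events

# Flat prices with a single sharp drop: only the drop should trigger.
prices = [100.0] * 30 + [90.0] + [100.0] * 5
ev = bollinger_events(prices)
print(np.flatnonzero(ev))  # [30]
```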
  •  
40.
  • Hosseini, Ahmad, et al. (författare)
  • Connectivity reliability in uncertain networks with stability analysis
  • 2016
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 57, s. 337-344
  • Tidskriftsartikel (refereegranskat) abstract
    • This paper treats the fundamental problems of reliability and stability analysis in uncertain networks. Here, we consider a collapsed, post-disaster traffic network composed of nodes (centers) and arcs (links), where the uncertain operationality or reliability of links is evaluated by domain experts. To ensure the arrival of relief materials and rescue vehicles to the disaster areas in time, uncertainty theory, which requires neither a probability distribution nor a fuzzy membership function, is employed to propose the problem of choosing the most reliable path (MRP). We then introduce the new problems of the α-most reliable path (α-MRP), which aims to minimize the pessimistic risk value of a path under a given confidence level α, and the very most reliable path (VMRP), where the objective is to maximize the confidence level of a path under a given threshold of pessimistic risk. Exploiting these concepts, we give the uncertainty distribution of the MRP in an uncertain traffic network. The objective of both α-MRP and VMRP is to determine a path that comprises the least risky route for transportation from a designated source node to a designated sink node, but with different decision criteria. Furthermore, a methodology is proposed to tackle the stability analysis issue in the framework of uncertainty programming; specifically, we show how to compute the arcs’ tolerances. Finally, we provide illustrative examples that show how our approaches work in realistic situations.
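For intuition, here is the classical deterministic version of the most reliable path problem: with independent link reliabilities in (0, 1], maximizing the product of reliabilities reduces to a shortest path under -log(reliability) weights. This is not the paper's uncertainty-theoretic formulation (which avoids probability distributions altogether), and the toy network is invented.

```python
# Classical reduction: most reliable path = Dijkstra on -log(reliability).
import heapq, math

def most_reliable_path(edges, src, dst):
    graph = {}
    for u, v, rel in edges:                     # directed links with reliability
        graph.setdefault(u, []).append((v, -math.log(rel)))
    dist, heap, prev = {src: 0.0}, [(0.0, src)], {}
    while heap:                                 # Dijkstra on transformed weights
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    path, node = [], dst                        # reconstruct the path
    while node != src:
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], math.exp(-dist[dst])     # path and its reliability

edges = [("s", "a", 0.9), ("a", "t", 0.9), ("s", "t", 0.7)]
path, reliability = most_reliable_path(edges, "s", "t")
print(path, round(reliability, 2))  # ['s', 'a', 't'] 0.81
```

The two-hop route wins because 0.9 × 0.9 = 0.81 exceeds the direct link's 0.7.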
  •  
41.
  • Hosseini, Ahmad (författare)
  • Max-type reliability in uncertain post-disaster networks through the lens of sensitivity and stability analysis
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 241
  • Tidskriftsartikel (refereegranskat) abstract
    • The functionality of infrastructures, particularly in densely populated areas, is greatly impacted by natural disasters, resulting in uncertain networks. Thus, it is important for crisis management professionals and computer-based systems for transportation networks (such as expert systems) to utilize trustworthy data and robust computational methodologies when addressing convoluted decision-making predicaments concerning the design of transportation networks and optimal routes. This study aims to evaluate the vulnerability of paths in post-disaster transportation networks, with the aim of facilitating rescue operations and ensuring the safe delivery of supplies to affected regions. To investigate the problem of links' tolerances in uncertain networks and the resiliency and reliability of paths, an uncertainty theory-based model that employs minmax optimization with a bottleneck objective function is used. The model addresses the uncertain maximum reliable paths problem, which takes into account uncertain risk variables associated with links. Rather than using conventional methods for calculating the deterministic tolerances of a single element in combinatorial optimization, this study introduces a generalization of stability analysis based on tolerances where the perturbations in a group of links are involved. The analysis defines set tolerances that specify the minimum and maximum values that a designated group of links could simultaneously fluctuate while maintaining the optimality of the max-type reliable paths. The study shows that set tolerances are well-defined and proposes computational methods to calculate or bound such quantities, which were previously unresearched and difficult to measure. The model and methods are demonstrated to be both theoretically and numerically efficient by applying them to four subnetworks from our case study. In conclusion, this study provides a comprehensive approach to addressing uncertainty in reliability problems in networks, with potential applications in various fields.
  •  
42.
  • Hosseini, S. Ahmad, et al. (författare)
  • A hybrid greedy randomized heuristic for designing uncertain transport network layout
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 190
  • Tidskriftsartikel (refereegranskat) abstract
    • The foundations of efficient management are laid on transport networks in various scientific and industrial fields. Nonetheless, establishing an optimum transport network design (TND) is complicated due to uncertainty in the operating environment. As a result, an uncertain network may be a more realistic representation of an actual transport network. The present study deals with an uncertain TND problem in which uncertain programming and the greedy randomized adaptive search procedure (GRASP) are used to develop an original optimization framework and propose a solution technique for obtaining cost-efficient designs. To this end, we develop the concept of the alpha-shortest cycle (alpha-SC) employing the pessimistic value criterion, given a user-defined predesignated confidence level alpha. Employing this concept and the operational law of uncertain programming, a new auxiliary chance-constrained programming model is established for the uncertain TND problem, and we prove the existence of an equivalence relation between TNDs in an uncertain network and those in an auxiliary deterministic network. Specifically, we articulate how to obtain the uncertainty distribution of the overall optimal uncertain network's design cost. Finally, the effectiveness and practical performance of the heuristic and the optimization model are illustrated by adopting samples with different topologies from a case study, to show how our approach works in realistic networks and to highlight some of the heuristic's features.
  •  
43.
  • Ilić, Mihailo, et al. (författare)
  • Towards optimal learning : Investigating the impact of different model updating strategies in federated learning
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 249:Part A
  • Tidskriftsartikel (refereegranskat) abstract
    • With rising data security concerns, privacy preserving machine learning (ML) methods have become a key research topic. Federated learning (FL) is one such approach which has gained a lot of attention recently as it offers greater data security in ML tasks. Substantial research has already been done on different aggregation methods, personalized FL algorithms, etc. However, insufficient work has been done to identify the effects different model update strategies (concurrent FL, incremental FL, etc.) have on federated model performance. This paper presents results of extensive FL simulations run on multiple datasets with different conditions in order to determine the efficiency of 4 different FL model update strategies: concurrent, semi-concurrent, incremental, and cyclic-incremental. We have found that incremental updating methods offer more reliable FL models in cases where data is distributed both evenly and unevenly between edge nodes, especially when the number of data samples across all edge nodes is small.
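Two of the compared strategies can be contrasted in a toy numpy sketch: a concurrent round averages updates that all start from the same global model (FedAvg-style), while an incremental round passes the model from node to node. The "training" is a single gradient step on a noise-free least-squares problem, purely for illustration; nothing here comes from the paper's simulation setup.

```python
# Toy contrast of two federated model update strategies on linear regression.
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step on the node's local least-squares objective.
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def concurrent_round(w_global, shards):
    # All nodes start from the same global model; updates are averaged.
    updates = [local_step(w_global, X, y) for X, y in shards]
    return np.mean(updates, axis=0)

def incremental_round(w_global, shards):
    # The model is passed from node to node, each training in turn.
    w = w_global
    for X, y in shards:
        w = local_step(w, X, y)
    return w

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(3):                               # three edge nodes
    X = rng.standard_normal((40, 2))
    shards.append((X, X @ w_true))               # noise-free local data

w = np.zeros(2)
for _ in range(200):
    w = concurrent_round(w, shards)
print(np.round(w, 3))   # converges toward [ 2. -1.]
```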
  •  
44.
  • Jardines, Aniel, et al. (författare)
  • Thunderstorm prediction during pre-tactical air-traffic-flow management using convolutional neural networks
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 241, s. 122466-
  • Tidskriftsartikel (refereegranskat) abstract
    • Thunderstorms can be a large source of disruption for European air-traffic management causing a chaotic state of operation within the airspace system. In current practice, air-traffic managers are provided with imprecise forecasts which limit their ability to plan strategically. As a result, weather mitigation is performed using tactical measures with a time horizon of three hours. Increasing the lead time of thunderstorm predictions to the day before operations could help air-traffic managers plan around weather and improve the efficiency of air-traffic-management operations. Emerging techniques based on machine learning have provided promising results, partly attributed to reduced human bias and improved capacity in predicting thunderstorms purely from numerical weather prediction data. In this paper, we expand on our previous work on thunderstorm forecasting, by applying convolutional neural networks (CNNs) to exploit the spatial characteristics embedded in the weather data. The learning task of predicting convection is formulated as a binary-classification problem based on satellite data. The performance of multiple CNN-based architectures, including a fully-convolutional neural network (FCN), a CNN-based encoder–decoder, a U-Net, and a pyramid-scene parsing network (PSPNet) are compared against a multi-layer-perceptron (MLP) network. Our work indicates that CNN-based architectures improve the performance of point-prediction models, with a fully-convolutional neural-network architecture having the best performance. Results show that CNN-based architectures can be used to increase the prediction lead time of thunderstorms. Lastly, a case study illustrating the applications of convection-prediction models in an air-traffic-management setting is presented.
  •  
45.
  • Johansson, Ulf, et al. (författare)
  • Interpretable regression trees using conformal prediction
  • 2018
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 97, s. 394-404
  • Tidskriftsartikel (refereegranskat) abstract
    • A key property of conformal predictors is that they are valid, i.e., their error rate on novel data is bounded by a preset level of confidence. For regression, this is achieved by turning the point predictions of the underlying model into prediction intervals. Thus, the most important performance metric for evaluating conformal regressors is not the error rate, but the size of the prediction intervals, where models generating smaller (more informative) intervals are said to be more efficient. State-of-the-art conformal regressors typically utilize two separate predictive models: the underlying model providing the center point of each prediction interval, and a normalization model used to scale each prediction interval according to the estimated level of difficulty for each test instance. When using a regression tree as the underlying model, this approach may cause test instances falling into a specific leaf to receive different prediction intervals. This clearly deteriorates the interpretability of a conformal regression tree compared to a standard regression tree, since the path from the root to a leaf can no longer be translated into a rule explaining all predictions in that leaf. In fact, the model cannot even be interpreted on its own, i.e., without reference to the corresponding normalization model. Current practice effectively presents two options for constructing conformal regression trees: to employ a (global) normalization model, and thereby sacrifice interpretability; or to avoid normalization, and thereby sacrifice both efficiency and individualized predictions. In this paper, two additional approaches are considered, both employing local normalization: the first approach estimates the difficulty by the standard deviation of the target values in each leaf, while the second approach employs Mondrian conformal prediction, which results in regression trees where each rule (path from root node to leaf node) is independently valid. An empirical evaluation shows that the first approach is as efficient as current state-of-the-art approaches, thus eliminating the efficiency vs. interpretability trade-off present in existing methods. Moreover, it is shown that if a validity guarantee is required for each single rule, as provided by the Mondrian approach, a penalty with respect to efficiency has to be paid, but it is only substantial at very high confidence levels.
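The interval construction discussed above can be sketched as plain split conformal regression in numpy. The underlying model here is a global mean predictor rather than a regression tree, no normalization model is used, and the data are synthetic; only the validity mechanism (calibration residuals turned into an interval with coverage at least 1 - alpha) is shown.

```python
# Split conformal regression sketch: a point prediction plus a quantile of
# absolute calibration residuals yields a valid prediction interval.
import numpy as np

def conformal_interval(cal_resid, y_hat, alpha=0.1):
    """Prediction interval from absolute calibration residuals."""
    n = len(cal_resid)
    # Finite-sample corrected quantile of the nonconformity scores.
    q = np.quantile(cal_resid, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return y_hat - q, y_hat + q

rng = np.random.default_rng(2)
y_train = rng.normal(10.0, 1.0, 200)
y_cal = rng.normal(10.0, 1.0, 200)
point = y_train.mean()                           # underlying model: global mean
resid = np.abs(y_cal - point)                    # nonconformity scores
lo, hi = conformal_interval(resid, point, alpha=0.1)

# Empirical coverage on fresh data should land near 90%.
y_test = rng.normal(10.0, 1.0, 2000)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(round(coverage, 3))
```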
  •  
46.
  • Kabir, Sami, PhD Student, et al. (författare)
  • An Integrated Approach of Belief Rule Base and Convolutional Neural Network to Monitor Air Quality in Shanghai
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 206
  • Tidskriftsartikel (refereegranskat) abstract
    • Accurate monitoring of air quality can reduce its adverse impact on earth. Ground-level sensors can provide fine particulate matter (PM2.5) concentrations and ground images, but such sensors have limited spatial coverage and require deployment cost. PM2.5 can also be estimated from satellite-retrieved Aerosol Optical Depth (AOD). However, AOD is subject to uncertainties associated with its retrieval algorithms, which constrain the spatial resolution of the estimated PM2.5. AOD is not retrievable under cloudy weather either. In contrast, satellite images provide continuous spatial coverage with no separate deployment cost. The accuracy of monitoring from such satellite images is hindered by uncertainties in the sensor data of relevant environmental parameters, such as relative humidity, temperature, wind speed, and wind direction. A Belief Rule Based Expert System (BRBES) is an efficient algorithm to address these uncertainties, while a Convolutional Neural Network (CNN) is suitable for image analytics. Hence, we propose a novel model integrating a CNN with a BRBES to monitor air quality from satellite images with improved accuracy. We customized the CNN and optimized the BRBES to increase monitoring accuracy further. Our model differentiates polluted air from cloud in obscure images. Valid environmental data (temperature, wind speed, and wind direction) have been adopted to further strengthen the monitoring performance of our proposed model. Three-year observation data (satellite images and environmental parameters) from 2014 to 2016 of Shanghai have been employed to analyze and design our proposed model. The results conclude that the accuracy of our model in monitoring the PM2.5 of Shanghai is higher than that of a CNN alone and other conventional machine learning methods. Real-time validation of our model on near real-time satellite images of April 2021 of Shanghai shows an average difference between our calculated PM2.5 concentrations and the actual ones within ±5.51.
  •  
47.
  • Kalsyte, Zivile, et al. (författare)
  • A novel approach to designing an adaptive committee applied to predicting company’s future performance
  • 2013
  • Ingår i: Expert systems with applications. - Oxford : Pergamon Press. - 0957-4174 .- 1873-6793. ; 40:6, s. 2051-2057
  • Tidskriftsartikel (refereegranskat) abstract
    • This article presents an approach to designing an adaptive, data-dependent committee of models applied to the prediction of several financial attributes for assessing a company's future performance. Current liabilities/Current assets, Total liabilities/Total assets, Net income/Total assets, and Operating income/Total liabilities are the attributes used in this paper. A self-organizing map (SOM) used for data mapping and analysis enables building committees that are specific (in committee size and aggregation weights) to each SOM node. The number of basic models aggregated into a committee and the aggregation weights depend on the accuracy of the basic models and their ability to generalize in the vicinity of the SOM node. A random forest is used as the basic model in this study. The developed technique was tested on data concerning companies from ten sectors of the healthcare industry of the United States and compared with results obtained from averaging and weighted averaging committees. The proposed adaptivity of committee size and aggregation weights led to a statistically significant increase in prediction accuracy compared to other types of committees.
  •  
48.
  • Kalsyte, Zivile, et al. (författare)
  • A novel approach to exploring company’s financial soundness : Investor’s perspective
  • 2013
  • Ingår i: Expert systems with applications. - Oxford : Pergamon Press. - 0957-4174 .- 1873-6793. ; 40:13, s. 5085-5092
  • Tidskriftsartikel (refereegranskat) abstract
    • Prediction of a company's life cycle stage change, creation of an ordered 2D map allowing exploration of a company's financial soundness from a rating agency perspective, and prediction of trends of the main valuation attributes usually used by investors are the main objectives of this article. The developed algorithms are based on a random forest (RF) and the nonlinear data mapping technique ''t-distributed stochastic neighbor embedding''. Information from five different perspectives, namely balance, income, cash flow, stock price, and risk indicators, was aggregated via proximity matrices of RF to enable exploration of a company's financial soundness from a rating agency perspective. The proposed use of information not only from companies' financial statements but also from the stock price and risk indicator perspectives has proved useful in creating ordered 2D maps of rated companies. The companies were well ordered according to the credit risk rating assigned by the Moody's rating agency. Results of experimental investigations substantiate that the developed models are capable of predicting short-term trends of the main valuation attributes, providing valuable information for investors, with low error. The models reflect the financial soundness of actions taken by a company's management team. It was also found that a company's life cycle stage change can be determined with an average accuracy of 72.7%. Bearing in mind the fuzziness of the transition moment, the obtained prediction accuracy is rather encouraging.
  •  
49.
  • Karlsson, Samuel, et al. (författare)
  • D+∗: A risk aware platform agnostic heterogeneous path planner
  • 2023
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 215
  • Forskningsöversikt (refereegranskat) abstract
    • This article establishes the novel D+*, a risk-aware and platform-agnostic heterogeneous global path planner for robotic navigation in complex environments. The proposed planner addresses a fundamental bottleneck of occupancy-based path planners related to their dependency on accurate and dense maps. More specifically, their performance is highly affected by poorly reconstructed or sparse areas (e.g. holes in the walls or ceilings), leading to faulty generated paths outside the physical boundaries of the 3-dimensional space. As will be presented, D+* addresses this challenge with three novel contributions, integrated into one solution, namely: (a) the proximity risk, (b) the modeling of the unknown space, and (c) the map updates. By adding a risk layer to spaces that are closer to the occupied ones, some holes are filled, and thus the problematic short-cutting through them to the final goal is prevented. The novel D+* also provides safety margins to the walls and other obstacles, a property that results in paths that do not cut corners that could potentially disrupt the platform operation. D+* also has the capability to model the unknown space as risk-free areas that should keep the paths inside, e.g. in a tunnel environment, thus heavily reducing the risk of larger shortcuts through openings in the walls. D+* also introduces a dynamic map handling capability that continuously updates with the latest information acquired during the map building process, allowing the planner to use constant map growth and resolve cases of planning over outdated, sparser map reconstructions. The proposed path planner is capable of planning both 2D and 3D paths by only changing the input map to a 2D or 3D map, and it is independent of the dynamics of the robotic platform. The efficiency of the proposed scheme is experimentally evaluated in multiple real-life experiments where D+* successfully produces properly planned paths, either 2D paths in the use case of the Boston Dynamics Spot robot or 3D paths in the case of an unmanned aerial vehicle, in varying and challenging scenarios.
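The proximity-risk contribution can be illustrated on a toy occupancy grid: cells near occupied space receive a decaying extra traversal cost, so a planner that minimizes accumulated cost keeps a margin from walls. The grid, decay rule, and cost values below are invented for illustration and are not the actual D+* implementation.

```python
# Toy proximity-risk layer on a 2D occupancy grid: repeatedly dilate the
# occupied mask and assign a decaying cost to each new ring of cells.
import numpy as np

def proximity_risk(occ, radius=2, risk=5.0):
    """Add a decaying risk layer around occupied cells of a 2D boolean grid."""
    layer = np.where(occ, np.inf, 0.0)          # occupied cells are untraversable
    current = occ.astype(bool)
    for r in range(1, radius + 1):
        grown = current.copy()                  # dilate by one cell (4-neighborhood)
        grown[1:, :] |= current[:-1, :]
        grown[:-1, :] |= current[1:, :]
        grown[:, 1:] |= current[:, :-1]
        grown[:, :-1] |= current[:, 1:]
        ring = grown & ~current                 # cells at exactly distance r
        layer[ring] = risk / r                  # closer rings cost more
        current = grown
    return layer

occ = np.zeros((5, 5), dtype=bool)
occ[2, 2] = True                                # a single wall cell
risk_map = proximity_risk(occ)
print(risk_map[2, 3], risk_map[2, 4])  # 5.0 2.5
```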
  •  
50.
  • Kazemi, Samira, et al. (författare)
  • Open Data for Anomaly Detection in Maritime Surveillance
  • 2013
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 40:14, s. 5719-5729
  • Tidskriftsartikel (refereegranskat) abstract
    • Maritime Surveillance has received increased attention from a civilian perspective in recent years. Anomaly detection is one of many techniques available for improving the safety and security in this domain. Maritime authorities use confidential data sources for monitoring the maritime activities; however, a paradigm shift on the Internet has created new open sources of data. We investigate the potential of using open data as a complementary resource for anomaly detection in maritime surveillance. We present and evaluate a decision support system based on open data and expert rules for this purpose. We conduct a case study in which experts from the Swedish coastguard participate to conduct a real-world validation of the system. We conclude that the exploitation of open data as a complementary resource is feasible since our results indicate improvements in the efficiency and effectiveness of the existing surveillance systems by increasing the accuracy and covering unseen aspects of maritime activities.
  •  
  • Resultat 1-50 av 112
Typ av publikation
tidskriftsartikel (107)
forskningsöversikt (5)
Typ av innehåll
refereegranskat (110)
övrigt vetenskapligt/konstnärligt (2)
Författare/redaktör
Verikas, Antanas, 19 ... (9)
Bacauskiene, Marija (8)
Nikolakopoulos, Geor ... (7)
Gelzinis, Adas (7)
Tiwari, Prayag, 1991 ... (5)
Boström, Henrik (4)
Nowaczyk, Sławomir, ... (3)
Johnsson, Magnus (3)
Verikas, Antanas (3)
Vaiciukynas, Evaldas (3)
Gil, David (3)
Ng, Amos H. C. (2)
Papapetrou, Panagiot ... (2)
Ottersten, Björn, 19 ... (2)
Andersson, Karl, 197 ... (2)
Lavesson, Niklas (2)
Vinuesa, Ricardo (2)
Mansouri, Sina Shari ... (2)
Kanellakis, Christof ... (2)
Hilletofth, Per (2)
Johansson, Ulf (2)
Englund, Cristofer (2)
Aouada, Djamila (2)
Boldt, Martin (2)
Borg, Anton (2)
Barua, Shaibal (2)
Pashami, Sepideh, 19 ... (2)
Lundström, Jens, 198 ... (2)
Bouguelia, Mohamed-R ... (2)
Patriksson, Michael, ... (2)
Sheikholharam Mashha ... (2)
Bandaru, Sunith (2)
Rydén, Patrik (2)
Deb, Kalyanmoy (2)
Lindström, Erik (2)
Papadimitriou, Andre ... (2)
Asadi, M. (2)
Kourentzes, Nikolaos (2)
Uloza, Virgilijus (2)
Bagloee, S. A. (2)
Bahnsen, Alejandro C ... (2)
Nystrup, Peter (2)
Eivazi, Hamidreza (2)
Löfström, Tuwe, 1977 ... (2)
Koval, Anton (2)
Deegalla, Sampath (2)
Walgama, Keerthi (2)
Saberi-Movahed, Fari ... (2)
Fries, Niklas (2)
Ning, Xin (2)
Lärosäte
Högskolan i Halmstad (22)
Kungliga Tekniska Högskolan (13)
Luleå tekniska universitet (12)
Jönköping University (10)
Chalmers tekniska högskola (9)
Umeå universitet (7)
Mälardalens universitet (7)
Linköpings universitet (7)
Lunds universitet (7)
Göteborgs universitet (6)
Högskolan i Skövde (6)
Blekinge Tekniska Högskola (5)
RISE (4)
Uppsala universitet (3)
Stockholms universitet (2)
Högskolan i Gävle (2)
Örebro universitet (2)
Malmö universitet (2)
Högskolan i Borås (2)
Karlstads universitet (2)
Karolinska Institutet (2)
Sveriges Lantbruksuniversitet (2)
Handelshögskolan i Stockholm (1)
Mittuniversitetet (1)
Linnéuniversitetet (1)
Högskolan Dalarna (1)
VTI - Statens väg- och transportforskningsinstitut (1)
Språk
Engelska (112)
Forskningsämne (UKÄ/SCB)
Naturvetenskap (77)
Teknik (42)
Samhällsvetenskap (9)
Medicin och hälsovetenskap (3)
Lantbruksvetenskap (2)
