SwePub
Search the SwePub database


Result list for the search "L773:0957 4174"

Search query: L773:0957 4174

  • Results 1-50 of 108
1.
  • Afzal, Wasif, et al. (author)
  • On the application of genetic programming for software engineering predictive modeling : A systematic review
  • 2011
  • In: Expert Systems with Applications. - : Pergamon-Elsevier Science Ltd. - 0957-4174 .- 1873-6793. ; 38:9, p. 11984-11997
  • Research review (peer-reviewed), abstract:
    • The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) Software quality classification (eight primary studies). (ii) Software cost/effort/size estimation (seven primary studies). (iii) Software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling: the results are inconclusive for software cost/effort/size estimation.
  •  
2.
  • Aler, Ricardo, et al. (author)
  • Study of Hellinger Distance as a splitting metric for Random Forests in balanced and imbalanced classification datasets
  • 2020
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 149
  • Journal article (peer-reviewed), abstract:
    • Hellinger Distance (HD) is a splitting metric that has been shown to have an excellent performance for imbalanced classification problems for methods based on Bagging of trees, while also showing good performance for balanced problems. Given that Random Forests (RF) use Bagging as one of two fundamental techniques to create diversity in the ensemble, it could be expected that HD is also effective for this ensemble method. The main aim of this article is to carry out an extensive investigation on important aspects about the use of HD in RF, including handling of multi-class problems, hyper-parameter optimization, metrics comparison, probability estimation, and metrics combination. In particular, HD is compared to other commonly used splitting metrics (Gini and Gain Ratio) in several contexts: balanced/imbalanced and two-class/multi-class. Two aspects related to classification problems are assessed: classification itself and probability estimation. HD is defined for two-class problems, but there are several ways in which it can be extended to deal with multi-class and this article studies the performance of the available options. Finally, even though HD can be used as an alternative to other splitting metrics, there is no reason to limit RF to use just one of them. Therefore, the final study of this article is to determine whether selecting the splitting metric using cross-validation on the training data can improve results further. Results show HD to be a robust measure for RF, with some weakness for balanced multi-class datasets (especially for probability estimation). Combination of metrics is able to result in a more robust performance. However, experiments of HD with text datasets show Gini to be more suitable than HD for this kind of problems.
  •  
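The Hellinger-distance split criterion described in entry 2 can be illustrated in a few lines. The sketch below is a minimal two-class version in the spirit of Hellinger distance decision trees, not the authors' implementation: it scores a candidate binary split by the Hellinger distance between the per-branch distributions of the two classes.

```python
import numpy as np

def hellinger_split_score(y_left, y_right, pos_label=1):
    """Hellinger distance between the per-class branch distributions of a binary
    split (two-class case); larger values indicate a better separating split."""
    y_left, y_right = np.asarray(y_left), np.asarray(y_right)
    y_all = np.concatenate([y_left, y_right])
    n_pos = np.sum(y_all == pos_label)
    n_neg = y_all.size - n_pos
    if n_pos == 0 or n_neg == 0:          # score undefined for a pure parent node
        return 0.0
    score = 0.0
    for branch in (y_left, y_right):
        p = np.sum(branch == pos_label) / n_pos   # share of all positives in branch
        q = np.sum(branch != pos_label) / n_neg   # share of all negatives in branch
        score += (np.sqrt(p) - np.sqrt(q)) ** 2
    return float(np.sqrt(score))

# A split that concentrates positives on one side scores higher than a mixed one
print(hellinger_split_score([1, 1, 1, 0], [0, 0, 0, 1]))
print(hellinger_split_score([1, 0, 1, 0], [1, 0, 1, 0]))
```

Because the score only depends on how each class is distributed over the branches, and not on the class ratio itself, it stays informative under heavy class imbalance, which is the property the article investigates for Random Forests.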
3.
  • Alexandersson, Erik (author)
  • Cross-domain transfer learning for weed segmentation and mapping in precision farming using ground and UAV images
  • 2024
  • In: Expert Systems with Applications. - 0957-4174. ; 246
  • Journal article (peer-reviewed), abstract:
    • Weed and crop segmentation is becoming an increasingly integral part of precision farming that leverages the current computer vision and deep learning technologies. Research has been extensively carried out based on images captured with a camera from various platforms. Unmanned aerial vehicles (UAVs) and ground-based vehicles including agricultural robots are the two popular platforms for data collection in fields. They all contribute to site-specific weed management (SSWM) to maintain crop yield. Currently, the data from these two platforms is processed separately, though sharing the same semantic objects (weed and crop). In our paper, we have proposed a novel method with a new deep learning-based model and an enhanced data augmentation pipeline to train on field images alone and subsequently predict both field images and UAV images for weed segmentation and mapping. The network learning process is visualized by feature maps at shallow and deep layers. The results show that the mean intersection over union (IoU) values of the segmentation for the crop (maize), weeds, and soil background in the developed model for the field dataset are 0.744, 0.577, and 0.979, respectively, while for aerial images from a UAV with the same model, the IoU values of the segmentation for the crop (maize), weeds and soil background are 0.596, 0.407, and 0.875, respectively. To estimate the effect on the use of plant protection agents, we quantify the relationship between herbicide spraying saving rate and grid size (spraying resolution) based on the predicted weed map. The spraying saving rate is up to 90 % when the spraying resolution is 1.78 × 1.78 cm². The study shows that the developed deep convolutional neural network can classify weeds from both field and aerial images and delivers satisfactory results. To achieve this performance, it is crucial to perform preprocessing techniques that reduce dataset differences between the two distinct domains.
  •  
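The segmentation results in entry 3 are reported as mean intersection over union (IoU). As a reference, here is a minimal per-class IoU computation over integer-labelled masks; the three-class setup (soil, crop, weed) and the toy masks are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mean_iou(y_true, y_pred, n_classes=3):
    """Mean IoU over classes for integer-labelled segmentation masks."""
    ious = []
    for c in range(n_classes):
        intersection = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:                      # skip classes absent from both masks
            ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x3 masks with classes 0 = soil, 1 = crop, 2 = weed (illustrative labels)
y_true = np.array([[0, 1, 1], [2, 2, 0]])
y_pred = np.array([[0, 1, 2], [2, 0, 0]])
print(mean_iou(y_true, y_pred))
```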
4.
  • Altarabichi, Mohammed Ghaith, 1981-, et al. (author)
  • Fast Genetic Algorithm for feature selection — A qualitative approximation approach
  • 2023
  • In: Expert systems with applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 211
  • Journal article (peer-reviewed), abstract:
    • Evolutionary Algorithms (EAs) are often challenging to apply in real-world settings since evolutionary computations involve a large number of evaluations of a typically expensive fitness function. For example, an evaluation could involve training a new machine learning model. An approximation (also known as meta-model or a surrogate) of the true function can be used in such applications to alleviate the computation cost. In this paper, we propose a two-stage surrogate-assisted evolutionary approach to address the computational issues arising from using Genetic Algorithm (GA) for feature selection in a wrapper setting for large datasets. We define “Approximation Usefulness” to capture the necessary conditions to ensure correctness of the EA computations when an approximation is used. Based on this definition, we propose a procedure to construct a lightweight qualitative meta-model by the active selection of data instances. We then use a meta-model to carry out the feature selection task. We apply this procedure to the GA-based algorithm CHC (Cross generational elitist selection, Heterogeneous recombination and Cataclysmic mutation) to create a Qualitative approXimations variant, CHCQX. We show that CHCQX converges faster to feature subset solutions of significantly higher accuracy (as compared to CHC), particularly for large datasets with over 100K instances. We also demonstrate the applicability of the thinking behind our approach more broadly to Swarm Intelligence (SI), another branch of the Evolutionary Computation (EC) paradigm with results of PSOQX, a qualitative approximation adaptation of the Particle Swarm Optimization (PSO) method. A GitHub repository with the complete implementation is available. © 2022 The Author(s)
  •  
5.
  • Argyrou, Argyris, et al. (author)
  • A semi-supervised tool for clustering accounting databases with applications to internal controls
  • 2011
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 38:9, p. 11176-11181
  • Journal article (peer-reviewed), abstract:
    • A considerable body of literature attests to the significance of internal controls; however, little is known on how the clustering of accounting databases can function as an internal control procedure. To explore this issue further, this paper puts forward a semi-supervised tool that is based on self-organizing map and the IASB XBRL Taxonomy. The paper validates the proposed tool via a series of experiments on an accounting database provided by a shipping company. Empirical results suggest the tool can cluster accounting databases in homogeneous and well-separated clusters that can be interpreted within an accounting context. Further investigations reveal that the tool can compress a large number of similar transactions, and also provide information comparable to that of financial statements. The findings demonstrate that the tool can be applied to verify the processing of accounting transactions as well as to assess the accuracy of financial statements, and thus supplement internal controls.
  •  
6.
  • Bacauskiene, Marija, et al. (author)
  • Random forests based monitoring of human larynx using questionnaire data
  • 2012
  • In: Expert systems with applications. - Amsterdam : Elsevier. - 0957-4174 .- 1873-6793. ; 39:5, p. 5506-5512
  • Journal article (peer-reviewed), abstract:
    • This paper is concerned with soft computing techniques-based noninvasive monitoring of human larynx using subject’s questionnaire data. By applying random forests (RF), questionnaire data are categorized into a healthy class and several classes of disorders including: cancerous, noncancerous, diffuse, nodular, paralysis, and an overall pathological class. The most important questionnaire statements are determined using RF variable importance evaluations. To explore data represented by variables used by RF, the t-distributed stochastic neighbor embedding (t-SNE) and the multidimensional scaling (MDS) are applied to the RF data proximity matrix. When testing the developed tools on a set of data collected from 109 subjects, the 100% classification accuracy was obtained on unseen data in binary classification into the healthy and pathological classes. The accuracy of 80.7% was achieved when classifying the data into the healthy, cancerous, noncancerous classes. The t-SNE and MDS mapping techniques applied allow obtaining two-dimensional maps of data and facilitate data exploration aimed at identifying subjects belonging to a “risk group”. It is expected that the developed tools will be of great help in preventive health care in laryngology.
  •  
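Entry 6 combines random forest classification, variable-importance ranking of questionnaire statements, and low-dimensional maps (t-SNE/MDS) built from the RF data proximity matrix. The sketch below shows that pipeline shape on synthetic data with scikit-learn; the proximity definition (fraction of trees in which two samples share a terminal node) is the standard one, and the data and parameters are placeholders rather than the study's questionnaire.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import MDS

# Placeholder "questionnaire" data: 109 respondents, 20 statements
X, y = make_classification(n_samples=109, n_features=20, n_informative=6, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
print("most important statements:", ranking[:5])

# Proximity: fraction of trees in which two samples fall in the same leaf
leaves = rf.apply(X)                                   # (n_samples, n_trees) leaf ids
proximity = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)

# Two-dimensional map of respondents from the proximity-derived dissimilarity
embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(1.0 - proximity)
print(coords.shape)
```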
7.
  • Bagloee, S. A., et al. (author)
  • A hybrid machine-learning and optimization method for contraflow design in post-disaster cases and traffic management scenarios
  • 2019
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 124, p. 67-81
  • Journal article (peer-reviewed), abstract:
    • The growing number of man-made and natural disasters in recent years has made the disaster management a focal point of interest and research. To assist and streamline emergency evacuation, changing the directions of the roads (called contraflow, a traffic control measure) is proven to be an effective, quick and affordable scheme in the action list of the disaster management. The contraflow is computationally a challenging problem (known as NP-hard), hence developing an efficient method applicable to real-world and large-sized cases is a significant challenge in the literature. To cope with its complexities and to tailor to practical applications, a hybrid heuristic method based on a machine-learning model and bilevel optimization is developed. The idea is to try and test several contraflow scenarios providing a training dataset for a supervised learning (regression) model which is then used in an optimization framework to find a better scenario in an iterative process. This method is coded as a single computer program synchronized with GAMS (for optimization), MATLAB (for machine learning), EMME3 (for traffic simulation), MS-Access (for data storage) and MS-Excel (as an interface), and it is tested using a real dataset from Winnipeg, and Sioux-Falls as benchmarks. The algorithm managed to find globally optimal solutions for the Sioux-Falls example and improved accessibility to the dense and congested central areas of Winnipeg just by changing the direction of some roads.
  •  
8.
  • Bagloee, Saeed Asadi, et al. (author)
  • A hybrid machine-learning and optimization method to solve bi-level problems
  • 2018
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 95, p. 142-152
  • Journal article (peer-reviewed), abstract:
    • Bi-level optimization has widespread applications in many disciplines including management, economy, energy, and transportation. Because it is by nature an NP-hard problem, finding an efficient and reliable solution method tailored to large-sized cases of specific types is of the highest importance. To this end, we develop a hybrid method based on machine learning and optimization. For numerical tests, we set up a highly challenging case: a nonlinear discrete bi-level problem with equilibrium constraints in transportation science, known as the discrete network design problem. The hybrid method transforms the original problem to an integer linear programming problem based on a supervised learning technique and a tractable nonlinear problem. This methodology is tested using a real dataset in which the results are found to be highly promising. For the machine learning tasks we employ MATLAB, and to solve the optimization problems we use GAMS (with the CPLEX solver).
  •  
9.
  • Bagloee, S. A., et al. (author)
  • Minimization of water pumps' electricity usage: A hybrid approach of regression models with optimization
  • 2018
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 107, p. 222-242
  • Journal article (peer-reviewed), abstract:
    • Due to pervasive deployment of electricity-propelled water-pumps, water distribution systems (WDSs) are energy-intensive technologies which are largely operated and controlled by engineers based on their judgments and discretions. Hence energy efficiency in the water sector is a serious concern. To this end, this study is dedicated to the optimal operation of the WDS which is articulated as minimization of the pumps' energy consumption while maintaining flow, pressure, and tank water levels at a minimum level, also known as pump scheduling problem (PSP). This problem is proved to be NP-hard (i.e. a difficult problem computationally). We therefore develop a hybrid methodology incorporating machine-learning techniques as well as optimization methods to address real-life and large-sized WDSs. Other main contributions of this research are (i) in addition to fixed-speed pumps, the variable-speed pumps are optimally controlled, (ii) and operational rules such as water allocation rules can also be explicitly considered in the methodology. This methodology is tested using a large dataset in which the results are found to be highly promising. This methodology has been coded as a user-friendly software composed of MS-Excel (as a user interface), MS-Access (a database), MATLAB (for machine-learning), GAMS (with CPLEX solver for solving optimization problem) and EPANET (to solve hydraulic models). (C) 2018 Elsevier Ltd. All rights reserved.
  •  
10.
  • Bahnsen, Alejandro Correa, et al. (author)
  • Example-dependent cost-sensitive decision trees
  • 2015
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 42:19, p. 6609-6619
  • Journal article (peer-reviewed), abstract:
    • Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 
  •  
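Entry 10 builds example-dependent costs directly into the tree's impurity and pruning; the full algorithm is beyond a short sketch, but the evaluation idea, that each example carries its own misclassification cost (for instance the transaction amount in fraud detection) and models are compared by cost savings, can be shown compactly. The function names and cost conventions below are illustrative, not the authors' code.

```python
import numpy as np

def total_cost(y_true, y_pred, c_fp, c_fn, c_tp=0.0, c_tn=0.0):
    """Sum of per-example costs: each example may carry its own FP/FN cost."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    c_fp = np.broadcast_to(c_fp, y_true.shape)
    c_fn = np.broadcast_to(c_fn, y_true.shape)
    cost = np.where(y_true == 1,
                    np.where(y_pred == 1, c_tp, c_fn),   # positives: detected or missed
                    np.where(y_pred == 1, c_fp, c_tn))   # negatives: false alarm or correct
    return float(cost.sum())

def savings(y_true, y_pred, c_fp, c_fn):
    """Cost saved relative to the cheaper trivial policy (predict all 0 or all 1)."""
    y_true = np.asarray(y_true)
    base = min(total_cost(y_true, np.zeros_like(y_true), c_fp, c_fn),
               total_cost(y_true, np.ones_like(y_true), c_fp, c_fn))
    return 1.0 - total_cost(y_true, y_pred, c_fp, c_fn) / base

# Fraud-style toy example: a missed fraud costs the transaction amount,
# a false alarm costs a fixed administrative fee
amounts = np.array([120.0, 15.0, 800.0, 60.0])
y_true = np.array([0, 0, 1, 0])
y_pred = np.array([0, 0, 1, 1])
print(savings(y_true, y_pred, c_fp=5.0, c_fn=amounts))
```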
11.
  • Bahnsen, Alejandro Correa, et al. (author)
  • Feature engineering strategies for credit card fraud detection
  • 2016
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 51, p. 134-142
  • Journal article (peer-reviewed), abstract:
    • Every year billions of Euros are lost worldwide due to credit card fraud, forcing financial institutions to continuously improve their fraud detection systems. In recent years, several studies have proposed the use of machine learning and data mining techniques to address this problem. However, most studies used some sort of misclassification measure to evaluate the different solutions and did not take into account the actual financial costs associated with the fraud detection process. Moreover, when constructing a credit card fraud detection model, it is very important to extract the right features from the transactional data. This is usually done by aggregating the transactions in order to observe the spending behavioral patterns of the customers. In this paper we expand the transaction aggregation strategy and propose to create a new set of features based on analyzing the periodic behavior of the time of a transaction using the von Mises distribution. Then, using a real credit card fraud dataset provided by a large European card processing company, we compare state-of-the-art credit card fraud detection models and evaluate how the different sets of features affect the results. By including the proposed periodic features into the methods, the results show an average increase in savings of 13%.
  •  
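The periodic feature proposed in entry 11, modelling the time of day of a customer's transactions with a von Mises distribution, can be sketched with scipy. The data, the mapping of hours to angles, and the use of the fitted density as a "how typical is this time" feature are illustrative assumptions rather than the paper's exact aggregation strategy.

```python
import numpy as np
from scipy.stats import vonmises

# Hours of a customer's past transactions, mapped onto the unit circle
hours = np.array([9.5, 10.0, 11.2, 9.8, 14.0, 10.5, 9.9])
angles = 2 * np.pi * hours / 24.0

# Fit a von Mises distribution (kappa, mean direction) to the usual time of day
kappa, loc, _ = vonmises.fit(angles, fscale=1)

def time_likelihood(hour):
    """Periodic feature: how typical this time of day is for the customer."""
    return vonmises.pdf(2 * np.pi * hour / 24.0, kappa, loc=loc)

print(time_likelihood(10.0), time_likelihood(3.0))  # morning vs. night transaction
```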
12.
  • Bandaru, Sunith, et al. (author)
  • Data mining methods for knowledge discovery in multi-objective optimization : Part A - Survey
  • 2017
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 70, p. 139-159
  • Research review (peer-reviewed), abstract:
    • Real-world optimization problems typically involve multiple objectives to be optimized simultaneously under multiple constraints and with respect to several variables. While multi-objective optimization itself can be a challenging task, equally difficult is the ability to make sense of the obtained solutions. In this two-part paper, we deal with data mining methods that can be applied to extract knowledge about multi-objective optimization problems from the solutions generated during optimization. This knowledge is expected to provide deeper insights about the problem to the decision maker, in addition to assisting the optimization process in future design iterations through an expert system. The current paper surveys several existing data mining methods and classifies them by methodology and type of knowledge discovered. Most of these methods come from the domain of exploratory data analysis and can be applied to any multivariate data. We specifically look at methods that can generate explicit knowledge in a machine-usable form. A framework for knowledge-driven optimization is proposed, which involves both online and offline elements of knowledge discovery. One of the conclusions of this survey is that while there are a number of data mining methods that can deal with data involving continuous variables, only a few ad hoc methods exist that can provide explicit knowledge when the variables involved are of a discrete nature. Part B of this paper proposes new techniques that can be used with such datasets and applies them to discrete variable multi-objective problems related to production systems. 
  •  
13.
  • Bandaru, Sunith, et al. (author)
  • Data mining methods for knowledge discovery in multi-objective optimization : Part B - New developments and applications
  • 2017
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 70, p. 119-138
  • Journal article (peer-reviewed), abstract:
    • The first part of this paper served as a comprehensive survey of data mining methods that have been used to extract knowledge from solutions generated during multi-objective optimization. The current paper addresses three major shortcomings of existing methods, namely, lack of interactiveness in the objective space, inability to handle discrete variables and inability to generate explicit knowledge. Four data mining methods are developed that can discover knowledge in the decision space and visualize it in the objective space. These methods are (i) sequential pattern mining, (ii) clustering-based classification trees, (iii) hybrid learning, and (iv) flexible pattern mining. Each method uses a unique learning strategy to generate explicit knowledge in the form of patterns, decision rules and unsupervised rules. The methods are also capable of taking the decision maker's preferences into account to generate knowledge unique to preferred regions of the objective space. Three realistic production systems involving different types of discrete variables are chosen as application studies. A multi-objective optimization problem is formulated for each system and solved using NSGA-II to generate the optimization datasets. Next, all four methods are applied to each dataset. In each application, the methods discover similar knowledge for specified regions of the objective space. Overall, the unsupervised rules generated by flexible pattern mining are found to be the most consistent, whereas the supervised rules from classification trees are the most sensitive to user-preferences. 
  •  
14.
  • Barrow, Devon, et al. (author)
  • Automatic robust estimation for exponential smoothing : Perspectives from statistics and machine learning
  • 2020
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 160
  • Journal article (peer-reviewed), abstract:
    • A major challenge in automating the production of a large number of forecasts, as often required in many business applications, is the need for robust and reliable predictions. Increased noise, outliers and structural changes in the series, all too common in practice, can severely affect the quality of forecasting. We investigate ways to increase the reliability of exponential smoothing forecasts, the most widely used family of forecasting models in business forecasting. We consider two alternative sets of approaches, one stemming from statistics and one from machine learning. To this end, we adapt M-estimators, boosting and inverse boosting to parameter estimation for exponential smoothing.  We propose appropriate modifications that are necessary for time series forecasting while aiming to obtain scalable algorithms. We evaluate the various estimation methods using multiple real datasets and find that several approaches outperform the widely used maximum likelihood estimation. The novelty of this work lies in (1) demonstrating the usefulness of M-estimators, (2) and of inverse boosting, which outperforms standard boosting approaches, and (3) a comparative look at statistics versus machine learning inspired approaches.
  •  
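Entry 14 adapts M-estimators to parameter estimation for exponential smoothing. A minimal sketch of that idea is given below: simple exponential smoothing whose smoothing parameter is chosen by minimising a Huber loss of the one-step-ahead errors instead of a squared-error criterion. The crude scale estimate and the tuning constant are standard defaults, not the paper's exact estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ses_one_step_errors(y, alpha):
    """One-step-ahead errors of simple exponential smoothing with parameter alpha."""
    level, errors = y[0], []
    for obs in y[1:]:
        errors.append(obs - level)
        level = alpha * obs + (1 - alpha) * level
    return np.array(errors)

def huber(e, delta=1.345):
    """Huber rho function: quadratic near zero, linear in the tails."""
    a = np.abs(e)
    return np.where(a <= delta, 0.5 * e ** 2, delta * (a - 0.5 * delta))

def fit_robust_ses(y, delta=1.345):
    """Pick alpha by minimising the Huber loss of the one-step errors."""
    scale = np.std(np.diff(y)) or 1.0            # crude scale so delta is meaningful
    obj = lambda a: huber(ses_one_step_errors(y, a) / scale, delta).sum()
    return minimize_scalar(obj, bounds=(0.01, 0.99), method="bounded").x

y = np.array([10, 11, 10, 12, 11, 50, 12, 11, 13, 12], dtype=float)  # one outlier
print(fit_robust_ses(y))
```

With the outlier present, the Huber criterion keeps the smoothing parameter small, whereas a squared-error fit would chase the spike; that is the robustness effect the article studies at scale.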
15.
  • Barua, Shaibal, et al. (author)
  • Automatic driver sleepiness detection using EEG, EOG and contextual information
  • 2019
  • In: Expert systems with applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 115, p. 121-135
  • Journal article (peer-reviewed), abstract:
    • The many vehicle crashes caused by driver sleepiness each year motivate the development of automated driver sleepiness detection (ADSD) systems. This study proposes an automatic sleepiness classification scheme designed using data from 30 drivers who repeatedly drove in a high-fidelity driving simulator, both in alert and in sleep deprived conditions. Driver sleepiness classification was performed using four separate classifiers: k-nearest neighbours, support vector machines, case-based reasoning, and random forest, where physiological signals and contextual information were used as sleepiness indicators. The subjective Karolinska sleepiness scale (KSS) was used as target value. An extensive evaluation on multiclass and binary classifications was carried out using 10-fold cross-validation and leave-one-out validation. With 10-fold cross-validation, the support vector machine showed better performance than the other classifiers (79% accuracy for multiclass and 93% accuracy for binary classification). The effect of individual differences was also investigated, showing a 10% increase in accuracy when data from the individual being evaluated was included in the training dataset. Overall, the support vector machine was found to be the most stable classifier. The effect of adding contextual information to the physiological features improved the classification accuracy by 4% in multiclass classification and by 5% in binary classification.
  •  
16.
  • Begum, Shahina, et al. (author)
  • Classification of physiological signals for wheel loader operators using Multi-scale Entropy analysis and case-based reasoning
  • 2014
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 41:2, p. 295-305
  • Journal article (peer-reviewed), abstract:
    • Sensor signal fusion is becoming increasingly important in many areas including medical diagnosis and classification. Today, clinicians/experts often diagnose stress, sleepiness and tiredness on the basis of information collected from several physiological sensor signals. Since there are large individual variations when analyzing the sensor measurements, systems relying on a single sensor could easily be vulnerable to uncertain noise/interference in this domain; multiple sensors could provide more robust and reliable decisions. Therefore, this paper presents a classification approach, Multivariate Multiscale Entropy Analysis-Case-Based Reasoning (MMSE-CBR), that classifies physiological parameters of wheel loader operators by combining the CBR approach with a data level fusion method named Multivariate Multiscale Entropy (MMSE). The MMSE algorithm supports complexity analysis of multivariate biological recordings by aggregating several sensor measurements, e.g., Inter-beat-Interval (IBI) and Heart Rate (HR) from Electrocardiogram (ECG), Finger Temperature (FT), Skin Conductance (SC) and Respiration Rate (RR). Here, MMSE has been applied to extract features to formulate a case by fusing a number of physiological signals, and the CBR approach is applied to classify the cases by retrieving the most similar cases from the case library. Finally, the proposed approach, MMSE-CBR, has been evaluated with data from professional drivers at Volvo Construction Equipment, Sweden. The results demonstrate that the proposed system, which fuses information at the data level, could classify 'stressed' and 'healthy' subjects 83.33% correctly compared to an expert's classification. Furthermore, with another data set the achieved accuracy (83.3%) indicates that it could also classify two different conditions, 'adapt' (training) and 'sharp' (real-life driving), for the wheel loader operators. Thus, the new approach of MMSE-CBR could support the classification of operators and may be of interest to researchers developing systems based on information collected from different sensor sources.
  •  
17.
  • Beikmohammadi, Ali, 1995-, et al. (author)
  • SWP-LeafNET : A novel multistage approach for plant leaf identification based on deep CNN
  • 2022
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 202
  • Journal article (peer-reviewed), abstract:
    • Modern scientific and technological advances allow botanists to use computer vision-based approaches for plant identification tasks. These approaches have their own challenges. Leaf classification is a computer-vision task performed for the automated identification of plant species, a serious challenge due to variations in leaf morphology, including its size, texture, shape, and venation. Researchers have recently become more inclined toward deep learning-based methods rather than conventional feature-based methods due to the popularity and successful implementation of deep learning methods in image analysis, object recognition, and speech recognition.In this paper, to have an interpretable and reliable system, a botanist’s behavior is modeled in leaf identification by proposing a highly-efficient method of maximum behavioral resemblance developed through three deep learning-based models. Different layers of the three models are visualized to ensure that the botanist’s behavior is modeled accurately. The first and second models are designed from scratch. Regarding the third model, the pre-trained architecture MobileNetV2 is employed along with the transfer-learning technique. The proposed method is evaluated on two well-known datasets: Flavia and MalayaKew. According to a comparative analysis, the suggested approach is more accurate than hand-crafted feature extraction methods and other deep learning techniques in terms of 99.67% and 99.81% accuracy. Unlike conventional techniques that have their own specific complexities and depend on datasets, the proposed method requires no hand-crafted feature extraction. Also, it increases accuracy as compared with other deep learning techniques. Moreover, SWP-LeafNET is distributable and considerably faster than other methods because of using shallower models with fewer parameters asynchronously.
  •  
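The third model in entry 17 combines the pre-trained MobileNetV2 architecture with transfer learning. A generic Keras sketch of that pattern is shown below; the input size, number of leaf classes, and training data are placeholders, and this is not the SWP-LeafNET code.

```python
import tensorflow as tf

NUM_CLASSES = 32                      # placeholder: number of leaf species

# Pre-trained backbone with ImageNet weights, classification head removed
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False                # freeze the backbone for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
```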
18.
  • Borg, Anton, et al. (author)
  • Detecting serial residential burglaries using clustering
  • 2014
  • In: Expert Systems with Applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 41:11, p. 5252-5266
  • Journal article (peer-reviewed), abstract:
    • According to the Swedish National Council for Crime Prevention, law enforcement agencies solved approximately three to five percent of the reported residential burglaries in 2012. Internationally, studies suggest that a large proportion of crimes are committed by a minority of offenders. Law enforcement agencies, consequently, are required to detect series of crimes, or linked crimes. Comparison of crime reports today is difficult as no systematic or structured way of reporting crimes exists, and no ability to search multiple crime reports exists. This study presents a systematic data collection method for residential burglaries. A decision support system for comparing and analysing residential burglaries is also presented. The decision support system consists of an advanced search tool and a plugin-based analytical framework. In order to find similar crimes, law enforcement officers have to review a large number of crimes. The potential use of the cut-clustering algorithm to group crimes based on their characteristics, and thereby reduce the number of crimes to review in residential burglary analysis, is investigated. The characteristics used are modus operandi, residential characteristics, stolen goods, spatial similarity, or temporal similarity. Clustering quality is measured using the modularity index and accuracy is measured using the Rand index. The clustering solution with the best quality score was based on residential characteristics, spatial proximity, and modus operandi, suggesting that the choice of which characteristics to use when grouping crimes can positively affect the end result. The results suggest that a high quality clustering solution performs significantly better than a random guesser. In terms of practical significance, the presented clustering approach is capable of reducing the number of cases to review while keeping most connected cases. While the approach might miss some connections, it is also capable of suggesting new connections. The results also suggest that while crime series clustering is feasible, further investigation is needed.
  •  
19.
  • Borg, Anton, et al. (author)
  • Using VADER sentiment and SVM for predicting customer response sentiment
  • 2020
  • In: Expert systems with applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 162
  • Journal article (peer-reviewed), abstract:
    • Customer support is important to corporate operations and involves dealing with both disgruntled and content customers, who can have different requirements. As such, it is important to quickly extract the sentiment of support cases. In this study we investigate sentiment analysis in customer support for a large Swedish Telecom corporation. The data set consists of 168,010 e-mails divided into 69,900 conversation threads without any sentiment information available. Therefore, VADER sentiment is used together with a Swedish sentiment lexicon in order to provide initial labeling of the e-mails. The e-mail content and sentiment labels are then used to train two Support Vector Machine models in extracting/classifying the sentiment of e-mails. Further, the ability to predict the sentiment of not-yet-seen e-mail responses is investigated. Experimental results show that the LinearSVM model was able to extract sentiment with a mean F1-score of 0.834 and a mean AUC of 0.896. Moreover, the LinearSVM algorithm was also able to predict the sentiment of an e-mail one step ahead in the thread (based on the text of an already sent e-mail) with a mean F1-score of 0.688 and a mean AUC of 0.805. The results indicate a predictable pattern in e-mail conversations that enables predicting the sentiment of a not-yet-seen e-mail. This can be used, e.g., to prepare particular actions for customers that are likely to have a negative response. It can also provide feedback on possible sentiment reactions to customer support e-mails.
  •  
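Entry 19 uses VADER to weakly label e-mails and then trains support vector machines on the text to extract and predict sentiment. The sketch below shows that two-step pattern with the English VADER lexicon, TF-IDF features and scikit-learn; the study used a Swedish lexicon and real support threads, so the example data, threshold and feature choice are placeholders.

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

emails = ["Thanks, the issue was resolved quickly!",
          "This is the third time my invoice is wrong. Unacceptable.",
          "Could you send me the contract details?"]

# Step 1: weak labels from the VADER lexicon (compound score thresholded at 0)
analyzer = SentimentIntensityAnalyzer()
labels = ["pos" if analyzer.polarity_scores(t)["compound"] >= 0 else "neg"
          for t in emails]

# Step 2: train a TF-IDF + linear SVM classifier on the weakly labelled e-mails
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(emails, labels)
print(clf.predict(["My problem is still not fixed"]))
```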
20.
  • Calikus, Ece, 1990-, et al. (author)
  • No free lunch but a cheaper supper : A general framework for streaming anomaly detection
  • 2020
  • In: Expert systems with applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 155
  • Journal article (peer-reviewed), abstract:
    • In recent years, there has been increased research interest in detecting anomalies in temporal streaming data. A variety of algorithms have been developed in the data mining community, which can be divided into two categories (i.e., general and ad hoc). In most cases, general approaches assume the one-size-fits-all solution model where a single anomaly detector can detect all anomalies in any domain.  To date, there exists no single general method that has been shown to outperform the others across different anomaly types, use cases and datasets. On the other hand, ad hoc approaches that are designed for a specific application lack flexibility. Adapting an existing algorithm is not straightforward if the specific constraints or requirements for the existing task change. In this paper, we propose SAFARI, a general framework formulated by abstracting and unifying the fundamental tasks in streaming anomaly detection, which provides a flexible and extensible anomaly detection procedure. SAFARI helps to facilitate more elaborate algorithm comparisons by allowing us to isolate the effects of shared and unique characteristics of different algorithms on detection performance. Using SAFARI, we have implemented various anomaly detectors and identified a research gap that motivates us to propose a novel learning strategy in this work. We conducted an extensive evaluation study of 20 detectors that are composed using SAFARI and compared their performances using real-world benchmark datasets with different properties. The results indicate that there is no single superior detector that works well for every case, proving our hypothesis that "there is no free lunch" in the streaming anomaly detection world. Finally, we discuss the benefits and drawbacks of each method in-depth and draw a set of conclusions to guide future users of SAFARI.
  •  
21.
  • Chen, Xi, et al. (author)
  • Customized bus route design with pickup and delivery and time windows: Model, case study and comparative analysis
  • 2021
  • In: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 168
  • Journal article (peer-reviewed), abstract:
    • The customized bus (CB) is an emerging type of public transportation system, which not only provides a flexible and reliable demand-responsive service, but also reduces the usage of private car to alleviate traffic congestion in metropolitan cities. The customized bus route design problem (CBRDP) is a crucial procedure in the CB service system designing. In this work, we develop a new type of problem scenario: Multi-Trip Multi-Pickup and Delivery Problem with Time Windows, to describe CBRDP by simultaneously optimizing the operating cost and passenger profit, where excess travel time is introduced to estimate passenger extra cost compared with taxi service, and each vehicle is allowed to perform multiple trips for operational cost savings. To solve this problem, a constructive two-stage heuristic algorithm is presented to obtain the Pareto solution. Taking a benchmark problem and Beijing commuting corridor as case studies, we calculate and compare the monetary and travel costs of CB with other travel modes, and quantitatively confirm that the CB can be a cost-effective choice for passengers.
  •  
22.
  • De Masellis, Riccardo, et al. (author)
  • Solving reachability problems on data-aware workflows
  • 2022
  • In: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 189
  • Journal article (peer-reviewed), abstract:
    • Recent advances in the field of Business Process Management (BPM) have brought about several suites able to model data objects along with the traditional control flow perspective. Nonetheless, when it comes to formal verification there is still a lack of effective verification tools on imperative data-aware process models and executions: the data perspective is often abstracted away and verification tools are often missing.Automated Planning is one of the core areas of Artificial Intelligence where theoretical investigations and concrete and robust tools have made possible the reasoning about dynamic systems and domains. Moreover planning techniques are gaining popularity in the context of BPM. Starting from these observations, we provide here a concrete framework for formal verification of reachability properties on an expressive, yet empirically tractable class of data-aware process models, an extension of Workflow Nets. Then we provide a rigorous mapping between the semantics of such models and that of three important Automated Planning paradigms: Action Languages, Classical Planning, and Model-Checking. Finally, we perform a comprehensive assessment of the performance of three popular tools supporting the above paradigms in solving reachability problems for imperative data-aware business processes, which paves the way for a theoretically well founded and practically viable exploitation of planning-based techniques on data-aware business processes.
  •  
23.
  • de Morais, Gustavo A. Prudencio, et al. (author)
  • Robust path-following control design of heavy vehicles based on multiobjective evolutionary optimization
  • 2022
  • In: Expert systems with applications. - : PERGAMON-ELSEVIER SCIENCE LTD. - 0957-4174 .- 1873-6793. ; 192
  • Journal article (peer-reviewed), abstract:
    • The ability to deal with systems parametric uncertainties is an essential issue for heavy self-driving vehicles in unconfined environments. In this sense, robust controllers prove to be efficient for autonomous navigation. However, uncertainty matrices for this class of systems are usually defined by algebraic methods which demand prior knowledge of the system dynamics. In this case, the control system designer depends on the quality of the uncertain model to obtain an optimal control performance. This work proposes a robust recursive controller designed via multiobjective optimization to overcome these shortcomings. Furthermore, a local search approach for multiobjective optimization problems is presented. The proposed method applies to any multiobjective evolutionary algorithm already established in the literature. The results presented show that this combination of model-based controller and machine learning improves the effectiveness of the system in terms of robustness, stability and smoothness.
  •  
24.
  • Deegalla, Sampath, et al. (author)
  • Random subspace and random projection nearest neighbor ensembles for high dimensional data
  • 2022
  • In: Expert systems with applications. - 0957-4174 .- 1873-6793. ; 191
  • Journal article (peer-reviewed), abstract:
    • The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher’s discriminant ratio, provide some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.
  •  
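Entries 24 and 25 (two registrations of the same article) evaluate nearest neighbor ensembles built on random projections or random subspaces of high-dimensional data. A compact scikit-learn sketch of the random projection variant with majority voting is given below; the dataset, projection dimension and ensemble size are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection

# High-dimensional toy data standing in for microarray/chemoinformatics features
X, y = make_classification(n_samples=300, n_features=500, n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each ensemble member sees the data through its own random projection
members = [make_pipeline(GaussianRandomProjection(n_components=50, random_state=i),
                         KNeighborsClassifier(n_neighbors=5)).fit(X_tr, y_tr)
           for i in range(25)]

votes = np.array([m.predict(X_te) for m in members])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)        # majority vote (labels are 0/1)
print("ensemble accuracy:", np.mean(y_pred == y_te))
```

Replacing the projection step with a random subset of feature columns per member gives the random subspace variant that the article compares against.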
25.
  • Deegalla, Sampath, et al. (author)
  • Random subspace and random projection nearest neighbor ensembles for high dimensional data
  • 2022
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 191
  • Journal article (peer-reviewed), abstract:
    • The random subspace and the random projection methods are investigated and compared as techniques for forming ensembles of nearest neighbor classifiers in high dimensional feature spaces. The two methods have been empirically evaluated on three types of high-dimensional datasets: microarrays, chemoinformatics, and images. Experimental results on 34 datasets show that both the random subspace and the random projection method lead to improvements in predictive performance compared to using the standard nearest neighbor classifier, while the best method to use depends on the type of data considered; for the microarray and chemoinformatics datasets, random projection outperforms the random subspace method, while the opposite holds for the image datasets. An analysis using data complexity measures, such as attribute to instance ratio and Fisher's discriminant ratio, provide some more detailed indications on what relative performance can be expected for specific datasets. The results also indicate that the resulting ensembles may be competitive with state-of-the-art ensemble classifiers; the nearest neighbor ensembles using random projection perform on par with random forests for the microarray and chemoinformatics datasets.
  •  
26.
  • Demirbay, Baris, et al. (author)
  • Multivariate regression (MVR) and different artificial neural network (ANN) models developed for optical transparency of conductive polymer nanocomposite films
  • 2022
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 207
  • Journal article (peer-reviewed), abstract:
    • The present study addresses a comparative performance assessment of multivariate regression (MVR) and well-optimized feed-forward, generalized regression and radial basis function neural network models which aimed to predict the transmitted light intensity (I_tr) of carbon nanotube (CNT)-loaded polymer nanocomposite films by employing a large set of spectroscopic data collected from photon transmission measurements. To assess the prediction performance of each developed model, universally accepted statistical error indices, regression, residual and Taylor diagram analyses were performed. As a novel performance evaluation criterion, 2D kernel density mapping was applied to predicted and experimental I_tr data to visually map out where the correlations are stronger and which data points can be more precisely estimated using the studied models. Employing MVR analysis, an empirical equation for I_tr was acquired as a function of only four input elements due to the sparseness and repetitive nature of the remaining input variables. The relative importance of each input variable was calculated separately through implementing Garson's algorithm for the best ANN model, and the mass fraction of CNT nanofillers was found to be the most significant input variable. Using the interconnection weights and bias values obtained for the feed-forward neural network (FFNN) model, a neural predictive formula was derived to model I_tr in terms of all input variables. 2D kernel density maps computed for each ANN model have shown that correlations between measured data and ANN predicted values are stronger for a specific I_tr range between 0% and 18%. To measure the stability of the ANN models, as a final analysis, the 5-fold cross-validation method was applied to the whole measurement data and 5 different iterations were additionally performed on each ANN model for 5 different training and test data splits. Statistical results from the 5-fold cross-validation analysis reaffirmed that the FFNN model exhibited superior prediction ability over all other ANN models, and all FFNN-predicted I_tr values agreed well with experimental I_tr data. Taking all computational results together, one can adapt our proposed FFNN model and neural predictive formula to predict I_tr of polymer nanocomposite films, which can be made from different polymers and nanofillers, by considering the specific data range as presented in this study with statistical details.
  •  
27.
  • Djordjevic, Boban, et al. (author)
  • An optimisation-based digital twin for automated operation of rail level crossings
  • 2024
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 239
  • Journal article (peer-reviewed), abstract:
    • Railway level crossings (LCs), as the intersection of road and rail transport, are the weak points in terms of safety, as they are used by different modes of transport. The safety level at LCs can therefore be affected by the behaviour of the users. However, the level of safety can also be affected by failures and errors in the operation of LC equipment. Apart from safety, errors and failures of the LC devices can lead to longer waiting times for road users. As the volume of traffic on rail and road increases, so does the risk that the level of safety will decrease. The increase in traffic volume via LC leads to higher traffic volume on the road and more frequent trains on the rail, which leads to longer waiting times for road users on the LCs. The longer waiting times can disrupt the traffic flow, especially during peak hours when the growing volume of traffic on road and rail increases road user dissatisfaction. Moreover, in the era of Industry 4.0 and Digital Rail, new digital and automated technologies are being introduced to improve rail performance and competitiveness. These technologies are aligned with the LCs and are intended to ensure the efficient operation of LC and the efficient use of LCs by conventional trains as well. To achieve this, a concept is needed that simultaneously monitors and visualises the operation of LC in real time, identifies potential faults and failures of the LC equipment, and updates and monitors the proper operation of LC based on the historical data and information of the operation of LC according to the road traffic volume and the characteristics of the rail traffic and trains. Therefore, in this study, a digital twin system (DT) for rail LC was initiated and built as a concept that can meet the above requirements for proper LC operation in real time. DT of LC includes all components of LC and communication between them to synchronise the operation of LC according to the real-time requirements. The DT system is able to optimise the operation time of LC by monitoring the operation of LC and collecting data to ensure efficient use of LC and reduce unnecessary waiting time for road users. In this paper, the operation time of LCs on Swedish and Taiwanese railways was compared using the developed level crossing optimisation model (OLC). Since the introduction of new signalling concepts requires an improvement of LC operating characteristics and their design, the operating strategies were modelled using the OLC model. The results of the work show that the optimal values of LC operation time are different for the case studies investigated. The replacement of track circuits as detection devices and the introduction of balises can also positively influence the operation time, as well as increasing the speed of trains via LCs. However, due to the formulation of the OLC model, the impact of a longer train length on the operation of LC is not recognised. The OLC model can be used to estimate the real-time operation time of LC under different traffic conditions as well as the impact of different changes and extensions of LC.
  •  
28.
  • Dong, Chenchen, et al. (author)
  • A complex network-based response method for changes in customer requirements for design processes of complex mechanical products
  • 2022
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 199, p. 117124-117124
  • Journal article (peer-reviewed), abstract:
    • The soaring demand, inevitable changes, and substantial change costs associated with complex mechanical products (CMPs) have accelerated the need to reasonably and accurately respond to changes in customer requirements during the product design process. However, current related studies cannot provide a simple and intuitive decision reference for decision-makers (DMs) to respond to these changes. In this work, a complex network theory-based methodology is proposed. First, a complex network model of CMPs is constructed; this model is processed unidirectionally through analysis of the constraint relation and affiliation among parts. Second, all nodes in this network are divided into levels, and all feasible change propagation paths are selected by breadth-first search. Furthermore, to quantify the change losses of paths, a novel “change workload” is proposed, which is a comprehensive indicator, and a distinct decision reference. The “change workload” is composed of “network change rate,” “change magnification node rate,” and “change magnification rate,” whose weights are evaluated by The Entropy Method and Technique for Order Preference by Similarity to an Ideal Solution. Due to the independence of the “change workload” from expert experience, the proposed methodology is reasonable and capable of outputting a list of affected parts and a preferred ordering of propagation paths, which could provide clearer and more direct guidance for DMs. This presented method is fully proven through a real-world case study of a wind turbine. 
  •  
29.
  • Ebrahimi, Zahra, et al. (author)
  • A Review on Deep Learning Methods for ECG Arrhythmia Classification
  • 2020
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793 .- 2590-1885. ; 7
  • Journal article (peer-reviewed), abstract:
    • Deep Learning (DL) has recently become a topic of study in different applications including healthcare, in which timely detection of anomalies on Electrocardiogram (ECG) can play a vital role in patient monitoring. This paper presents a comprehensive review study on the recent DL methods applied to the ECG signal for the classification purposes. This study considers various types of the DL methods such as Convolutional Neural Network (CNN), Deep Belief Network (DBN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). From the 75 studies reported within 2017 and 2018, CNN is dominantly observed as the suitable technique for feature extraction, seen in 52% of the studies. DL methods showed high accuracy in correct classification of Atrial Fibrillation (AF) (100%), Supraventricular Ectopic Beats (SVEB) (99.8%), and Ventricular Ectopic Beats (VEB) (99.7%) using the GRU/LSTM, CNN, and LSTM, respectively
  •  
30.
  • Eivazi, Hamidreza, et al. (author)
  • Towards extraction of orthogonal and parsimonious non-linear modes from turbulent flows
  • 2022
  • In: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 202, p. 117038-
  • Journal article (peer-reviewed), abstract:
    • Modal-decomposition techniques are computational frameworks based on data aimed at identifying a low-dimensional space for capturing dominant flow features: the so-called modes. We propose a deep probabilistic-neural-network architecture for learning a minimal and near-orthogonal set of non-linear modes from high-fidelity turbulent-flow data useful for flow analysis, reduced-order modeling and flow control. Our approach is based on beta-variational autoencoders (beta-VAEs) and convolutional neural networks (CNNs), which enable extracting non-linear modes from multi-scale turbulent flows while encouraging the learning of independent latent variables and penalizing the size of the latent vector. Moreover, we introduce an algorithm for ordering VAE-based modes with respect to their contribution to the reconstruction. We apply this method for non-linear mode decomposition of the turbulent flow through a simplified urban environment, where the flow-field data is obtained based on well-resolved large-eddy simulations (LESs). We demonstrate that by constraining the shape of the latent space, it is possible to motivate the orthogonality and extract a set of parsimonious modes sufficient for high-quality reconstruction. Our results show the excellent performance of the method in the reconstruction against linear-theory-based decompositions, where the energy percentage captured by the proposed method from five modes is equal to 87.36% against 32.41% of the POD. Moreover, we compare our method with available AE-based models. We show the ability of our approach in the extraction of near-orthogonal modes with the determinant of the correlation matrix equal to 0.99, which may lead to interpretability.
  •  
31.
  • Ejnarsson, Marcus, et al. (author)
  • Multi-resolution screening of paper formation variations on production line
  • 2009
  • In: Expert systems with applications. - Amsterdam : Elsevier. - 0957-4174 .- 1873-6793. ; 36:2, part 2, p. 3144-3152
  • Journal article (peer-reviewed), abstract:
    • This paper is concerned with a technique for detecting and monitoring abnormal paper formation variations in the machine direction online, in various frequency regions. A paper web is illuminated by two red diode lasers, and the reflected light, recorded as two time series of high-resolution measurements, constitutes the input signal to the papermaking process monitoring system. The time series are divided into blocks and each block is analyzed separately. The task is treated as kernel-based novelty detection applied to a multi-resolution time series representation obtained from band-pass filtering of the Fourier power spectrum of the time series block. The frequency content of each frequency region is characterized by a feature vector, which is transformed using kernel canonical correlation analysis and then categorized into the inlier or outlier class by the novelty detector. A ratio of outlying data points significantly exceeding the predetermined value indicates abnormalities in the paper formation. The experimental investigations performed have shown good repeatability and stability of the abnormality detection results. The tools developed are used for online paper formation monitoring in a paper mill.
  •  
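
The entry above characterizes each measurement block by the power in several frequency bands of its Fourier spectrum and feeds the features to a kernel novelty detector. The sketch below is a rough stand-in for that pipeline on synthetic data, using a plain one-class SVM; the paper's kernel canonical correlation transform and its band definitions are not reproduced.

    import numpy as np
    from sklearn.svm import OneClassSVM

    def band_power_features(block, n_bands=8):
        """Split the Fourier power spectrum of a block into equal-width bands
        and return the log power in each band as a feature vector."""
        power = np.abs(np.fft.rfft(block)) ** 2
        bands = np.array_split(power, n_bands)
        return np.log([b.sum() + 1e-12 for b in bands])

    rng = np.random.default_rng(0)
    normal_blocks = rng.normal(size=(200, 1024))          # synthetic "normal" signal
    X_train = np.array([band_power_features(b) for b in normal_blocks])

    detector = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)

    test_block = rng.normal(size=1024)
    test_block += 3 * np.sin(2 * np.pi * 0.05 * np.arange(1024))  # injected abnormality
    label = detector.predict(band_power_features(test_block).reshape(1, -1))
    print("outlier" if label[0] == -1 else "inlier")
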
32.
  • Englund, Cristofer, et al. (författare)
  • A novel approach to estimate proximity in a random forest : An exploratory study
  • 2012
  • Ingår i: Expert systems with applications. - Amsterdam : Elsevier BV. - 0957-4174 .- 1873-6793. ; 39:17, s. 13046-13050
  • Tidskriftsartikel (refereegranskat)abstract
    • A data proximity matrix is an important information source in random forest (RF) based data mining, including data clustering, visualization, outlier detection, substitution of missing values, and finding mislabeled data samples. A novel approach to estimating proximity is proposed in this work. The approach is based on measuring the distance between two terminal nodes in a decision tree. To assess the consistency (quality) of a data proximity estimate, we suggest using the proximity matrix as a kernel matrix in a support vector machine (SVM), under the assumption that a matrix of higher quality leads to higher classification accuracy. It is experimentally shown that the proposed approach improves the proximity estimate, especially when the RF is made of a small number of trees. It is also demonstrated that, for some tasks, an SVM exploiting the suggested proximity-matrix-based kernel outperforms an SVM based on a standard radial basis function kernel and the standard proximity-matrix-based kernel.
  •  
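
The entry above evaluates a proximity matrix by using it as a precomputed SVM kernel. The sketch below computes the standard same-leaf RF proximity (not the authors' terminal-node-distance variant) and plugs it into an SVM with a precomputed kernel; the dataset is simply a convenient sklearn example.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

    def proximity(leaves_a, leaves_b):
        """Standard RF proximity: fraction of trees in which two samples
        fall into the same terminal node."""
        return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)

    leaves_tr = rf.apply(X_tr)          # (n_samples, n_trees) leaf indices
    leaves_te = rf.apply(X_te)

    K_tr = proximity(leaves_tr, leaves_tr)   # train-vs-train kernel
    K_te = proximity(leaves_te, leaves_tr)   # test-vs-train kernel

    svm = SVC(kernel="precomputed").fit(K_tr, y_tr)
    print("accuracy:", svm.score(K_te, y_te))
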
33.
  • Englund, Cristofer, et al. (författare)
  • The application of data mining techniques to model visual distraction of bicyclists
  • 2016
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 52, s. 99-107
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper presents a novel approach to modelling visual distraction of bicyclists. A unique bicycle simulator equipped with sensors capable of capturing the behaviour of the bicyclist is presented. While cycling two similar scenario routes, once while simultaneously interacting with an electronic device and once without any electronic device, statistics of the measured speed, head movements, steering angle and bicycle road position are captured along with questionnaire data. These variables are used to model the self-assessed distraction level of the bicyclist. Data mining techniques based on random forests, support vector machines and neural networks are evaluated for the modelling task. Out of the 71 measured variables, a variable selection procedure based on random forests selects a small fraction, which improves the modelling performance. Combining the random forest-based variable selection with the support vector machine-based modelling technique achieves the best overall performance. The method shows that, with a few observable variables, it is possible to use machine learning to model, and thus predict, the distraction level of a bicyclist.
  •  
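
The best-performing combination above is random-forest-based variable selection followed by an SVM. The sketch below shows that generic combination as an sklearn pipeline on synthetic data; the 71 placeholder variables and the binary target are assumptions, not the simulator data.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder data: 71 variables, of which only a few are informative.
    X, y = make_classification(n_samples=500, n_features=71, n_informative=8,
                               random_state=0)

    pipeline = Pipeline([
        ("scale", StandardScaler()),
        # Keep only variables whose RF importance exceeds the mean importance.
        ("select", SelectFromModel(RandomForestClassifier(n_estimators=200,
                                                          random_state=0))),
        ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
    ])

    scores = cross_val_score(pipeline, X, y, cv=5)
    print("mean CV accuracy: %.3f" % scores.mean())
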
34.
  • Esmaeili, Leila, et al. (författare)
  • An efficient method to minimize cross-entropy for selecting multi-level threshold values using an improved human mental search algorithm
  • 2021
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 182
  • Tidskriftsartikel (refereegranskat)abstract
    • Minimum cross-entropy image thresholding (MCIT) is a multi-level image thresholding approach, but it suffers from high time complexity, in particular when the number of thresholds is high. To address this issue, this paper proposes a novel MCIT-based image thresholding method based on an improved human mental search (HMS) algorithm, a recently proposed population-based metaheuristic for complex optimisation problems. To further enhance efficacy, we improve the HMS algorithm (IHMSMLIT) with four modifications: adaptive selection of the number of mental searches instead of random selection, one-step k-means clustering for region clustering, updating based on global and personal experiences, and a random clustering strategy. To assess the proposed algorithm, we conduct an extensive set of experiments against several state-of-the-art and recent approaches on a benchmark set of images, in terms of several criteria including the objective function, peak signal-to-noise ratio (PSNR), feature similarity index (FSIM), structural similarity index (SSIM), and stability analysis. The obtained results demonstrate the competitive performance of the proposed algorithm.
  •  
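
The metaheuristic above searches for threshold sets that minimize a cross-entropy criterion. The sketch below implements one common minimum cross-entropy thresholding objective (a Li-and-Lee-style criterion, naively extended to several thresholds) together with a brute-force single-threshold search on a toy histogram; it is only meant to show the quantity an algorithm such as IHMSMLIT would minimize and may differ in detail from the paper's formulation.

    import numpy as np

    def mcet_objective(hist, thresholds, levels=256):
        """Cross-entropy of an image histogram under a multi-level thresholding,
        in the usual Li-and-Lee style: for each region, sum i * h(i) * log(i / mu),
        where mu is the intensity mean of that region. Lower is better."""
        i = np.arange(1, levels + 1, dtype=float)   # intensities shifted by 1 to avoid log(0)
        cuts = [0] + sorted(thresholds) + [levels]
        total = 0.0
        for lo, hi in zip(cuts[:-1], cuts[1:]):
            h = hist[lo:hi].astype(float)
            ii = i[lo:hi]
            mass = (ii * h).sum()
            if mass <= 0:
                continue                             # empty region contributes nothing
            mu = mass / h.sum()
            total += (ii * h * np.log(ii / mu)).sum()
        return total

    # Toy bimodal histogram and a brute-force search over a single threshold.
    rng = np.random.default_rng(0)
    samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
    hist, _ = np.histogram(samples, bins=256, range=(0, 256))
    best_t = min(range(1, 256), key=lambda t: mcet_objective(hist, [t]))
    print("selected threshold:", best_t)
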
35.
  • Farouq, Shiraz, 1980-, et al. (författare)
  • A conformal anomaly detection based industrial fleet monitoring framework : A case study in district heating
  • 2022
  • Ingår i: Expert systems with applications. - Oxford : Elsevier. - 0957-4174 .- 1873-6793. ; 201
  • Tidskriftsartikel (refereegranskat)abstract
    • The monitoring infrastructure of an industrial fleet can rely on the so-called unit-level and subfleet-level models to observe the behavior of a target unit. However, such infrastructure has to confront several challenges. First, from an anomaly detection perspective of monitoring a target unit, unit-level and subfleet-level models can give different information about the nature of an anomaly, and which approach or level model is appropriate is not always clear. Second, in the absence of well-understood prior models of unit and subfleet behavior, the choice of a base model at their respective levels, especially in an online/streaming setting, may not be clear. Third, managing false alarms is a major problem. To deal with these challenges, we proposed to rely on the conformal anomaly detection framework. In addition, an ensemble approach was deployed to mitigate the knowledge gap in understanding the underlying data-generating process at the unit and subfleet levels. Therefore, to monitor the behavior of a target unit, a unit-level ensemble model (ULEM) and a subfleet-level ensemble model (SLEM) were constructed, where each member of the respective ensemble is based on a conformal anomaly detector (CAD). However, since the information obtained by these two ensemble models through their p-values may not always agree, a combined ensemble model (CEM) was proposed. The results are based on real-world operational data obtained from district heating (DH) substations. Here, it was observed that CEM reduces the overall false alarms compared to ULEM or SLEM, albeit at the cost of some detection delay. The analysis demonstrated the advantages and limitations of ULEM, SLEM, and CEM. Furthermore, discords obtained from the state-of-the-art matrix-profile (MP) method and the combined calibration scores obtained from ULEM and SLEM were compared in an offline setting. Here, it was observed that SLEM achieved a better overall precision and detection delay. Finally, the different components related to ULEM, SLEM, and CEM were put together into what we refer to as TRANTOR: a conformal anomaly detection based industrial fleet monitoring framework. The proposed framework is expected to enable fleet operators in various domains to improve their monitoring infrastructure by efficiently detecting anomalous behavior and controlling false alarms at the target units. © 2022
  •  
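
Each ensemble member above is a conformal anomaly detector, which converts a nonconformity score into a p-value against a calibration set. The sketch below shows that p-value computation with a deliberately simple distance-to-mean nonconformity measure standing in for the paper's base models; the data and the 0.05 alarm threshold are placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    training = rng.normal(loc=50.0, scale=2.0, size=500)      # e.g. normal sensor readings
    calibration = rng.normal(loc=50.0, scale=2.0, size=200)

    def nonconformity(x, reference):
        """Toy nonconformity measure: absolute distance to the training mean."""
        return abs(x - reference.mean())

    calib_scores = np.array([nonconformity(x, training) for x in calibration])

    def conformal_p_value(x, reference, calib_scores):
        """Standard (non-smoothed) conformal p-value: the fraction of calibration
        scores at least as extreme as the score of the new observation."""
        score = nonconformity(x, reference)
        return (np.sum(calib_scores >= score) + 1) / (len(calib_scores) + 1)

    for reading in (50.5, 57.0):
        p = conformal_p_value(reading, training, calib_scores)
        print(f"reading={reading}: p-value={p:.3f}", "-> anomaly" if p < 0.05 else "")
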
36.
  • Flyckt, Jonatan, et al. (författare)
  • Detecting ditches using supervised learning on high-resolution digital elevation models
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier Ltd. - 0957-4174 .- 1873-6793. ; 201
  • Tidskriftsartikel (refereegranskat)abstract
    • Drained wetlands can constitute a large source of greenhouse gas emissions, but the drainage networks in these wetlands are largely unmapped, and better maps are needed to aid forest production and to better understand the climate consequences. We develop a method for detecting ditches in high-resolution digital elevation models derived from LiDAR scans. Thresholding methods using digital terrain indices can be used to detect ditches. However, a single threshold generally does not capture the variability in the landscape, and generates many false positives and negatives. We hypothesise that, by combining the digital terrain indices using supervised learning, we can improve ditch detection at a landscape scale. In addition to digital terrain indices, additional features are generated by transforming the data to include neighbouring cells for better ditch predictions. A Random Forests classifier is used to locate the ditches, and its probability output is processed to remove noise and binarised to produce the final ditch prediction. The 95% confidence interval for Cohen's Kappa index across the evaluation plots is [0.655, 0.781]. The study demonstrates that combining information from a suite of digital terrain indices using machine learning provides an effective technique for automatic ditch detection at a landscape scale, aiding both practical forest management and efforts to combat climate change. © 2022 The Authors
  •  
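
The pipeline above post-processes the random forest's per-cell ditch probabilities by removing noise and binarising. The sketch below shows only that last step, with a median filter and a fixed 0.5 threshold on a random placeholder raster; the filter size and threshold are assumptions, not the study's settings.

    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(0)

    # Placeholder for the per-cell ditch probabilities predicted by the
    # random forest over a DEM-derived raster (values in [0, 1]).
    prob_raster = rng.random((100, 100))
    prob_raster[40:43, 10:90] = 0.9        # a synthetic ditch-like linear feature

    # Remove salt-and-pepper noise, then binarise into a ditch mask.
    smoothed = median_filter(prob_raster, size=3)
    ditch_mask = smoothed >= 0.5

    print("predicted ditch cells:", int(ditch_mask.sum()))
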
37.
  • Fries, Niklas, et al. (författare)
  • A comparison of local explanation methods for high-dimensional industrial data : a simulation study
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 207
  • Tidskriftsartikel (refereegranskat)abstract
    • Prediction methods can be augmented by local explanation methods (LEMs) to perform root cause analysis for individual observations. But while most recent research on LEMs focuses on low-dimensional problems, real-world datasets commonly have hundreds or thousands of variables. Here, we investigate how LEMs perform for high-dimensional industrial applications. Seven prediction methods (penalized logistic regression, LASSO, gradient boosting, random forest and support vector machines) and three LEMs (TreeExplainer, Kernel SHAP, and the conditional normal sampling importance (CNSI)) were combined into twelve explanation approaches. These approaches were used to compute explanations for simulated data, and for real-world industrial data with simulated responses. The approaches were ranked by how well they predicted the contributions according to the true models. For the simulation experiment, the generalized linear methods provided the best explanations, while gradient boosting with either TreeExplainer or CNSI, or random forest with CNSI, were robust for all relationships. For the real-world experiment, TreeExplainer performed similarly, while the explanations from CNSI were significantly worse. The generalized linear models were fastest, followed by TreeExplainer, while CNSI and Kernel SHAP required several orders of magnitude more computation time. In conclusion, local explanations can be computed for high-dimensional data, but the choice of statistical tools is crucial.
  •  
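
Two of the three local explanation methods compared above, TreeExplainer and Kernel SHAP, are available in the shap package. The sketch below shows basic TreeExplainer usage on a gradient boosting model with synthetic high-dimensional data; it is not the study's setup, and CNSI is omitted because it is not a packaged routine.

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Synthetic stand-in for a high-dimensional industrial dataset.
    X, y = make_classification(n_samples=300, n_features=200, n_informative=10,
                               random_state=0)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:5])

    # Local explanation for the first observation: the variables that push the
    # prediction most strongly, by absolute SHAP value.
    top = np.argsort(np.abs(shap_values[0]))[::-1][:5]
    print("most influential variables for observation 0:", top)
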
38.
  • Fries, Niklas, et al. (författare)
  • Data-driven process adjustment policies for quality improvement
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 237
  • Tidskriftsartikel (refereegranskat)abstract
    • Common objectives in machine learning research are to predict the output quality of manufacturing processes, to perform root cause analysis in case of reduced quality, and to propose intervention strategies. The cost of reduced quality must be weighed against the cost of the interventions, which depend on required downtime, personnel costs, and material costs. Furthermore, there is a risk of false negatives, i.e., failure to identify the true root causes, or false positives, i.e., adjustments that further reduce the quality. A policy for process adjustments describes when and where to perform interventions, and we say that a policy is worthwhile if it reduces the expected operational cost. In this paper, we describe a data-driven alarm and root cause analysis framework, that given a predictive and explanatory model trained on high-dimensional process and quality data, can be used to search for a worthwhile adjustment policy. The framework was evaluated on large-scale simulated process and quality data. We find that worthwhile adjustment policies can be derived also for problems with a large number of explanatory variables. Interestingly, the performance of the adjustment policies is almost exclusively driven by the quality of the model fits. Based on these results, we discuss key areas of future research, and how worthwhile adjustment policies can be implemented in real world applications.
  •  
39.
  • Frumosu, Flavia Dalia, et al. (författare)
  • Cost-sensitive learning classification strategy for predicting product failures
  • 2020
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 161:15
  • Tidskriftsartikel (refereegranskat)abstract
    • In the current era of Industry 4.0, sensor data used in connection with machine learning algorithms can help manufacturing industries to reduce costs and to predict failures in advance. This paper addresses a binary classification problem found in manufacturing engineering, which focuses on how to ensure product quality delivery and at the same time to reduce production costs. The aim behind this problem is to predict the number of faulty products, which in this case is extremely low. As a result of this characteristic, the problem is reduced to an imbalanced binary classification problem. The authors contribute to imbalanced classification research in three important ways. First, the industrial application coming from the electronic manufacturing industry is presented in detail, along with its data and modelling challenges. Second, a modified cost-sensitive classification strategy based on a combination of Voronoi diagrams and genetic algorithm is applied to tackle this problem and is compared to several base classifiers. The results obtained are promising for this specific application. Third, in order to evaluate the flexibility of the strategy, and to demonstrate its wide range of applicability, 25 real-world data sets are selected from the KEEL repository with different imbalance ratios and number of features. The strategy, in this case implemented without a predefined cost, is compared with the same base classifiers as those used for the industrial problem.
  •  
40.
  • Georgoulas, George, et al. (författare)
  • Principal component analysis of the start-up transient and hidden Markov modeling for broken rotor bar fault diagnosis in asynchronous machines
  • 2013
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 40:17, s. 7024-7033
  • Tidskriftsartikel (refereegranskat)abstract
    • This article presents a novel computational method for the diagnosis of broken rotor bars in three phase asynchronous machines. The proposed method is based on Principal Component Analysis (PCA) and is applied to the stator’s three phase start-up current. The fault detection is easier in the start-up transient because of the increased current in the rotor circuit, which amplifies the effects of the fault in the stator’s current independently of the motor’s load. In the proposed fault detection methodology, PCA is initially utilized to extract a characteristic component, which reflects the rotor asymmetry caused by the broken bars. This component can be subsequently processed using Hidden Markov Models (HMMs). Two schemes, a multiclass and a one-class approach are proposed. The efficiency of the novel proposed schemes is evaluated by multiple experimental test cases. The results obtained indicate that the suggested approaches based on the combination of PCA and HMM, can be successfully utilized not only for identifying the presence of a broken bar but also for estimating the severity (number of broken bars) of the fault.
  •  
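
The method above combines PCA on the start-up current with hidden Markov models. The sketch below illustrates the generic pattern, PCA features per signal window and one GaussianHMM per condition with classification by log-likelihood, using the hmmlearn package and synthetic signals; it is not the authors' exact pipeline, and the window length, state count and fault signature are arbitrary.

    import numpy as np
    from hmmlearn.hmm import GaussianHMM
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    WINDOW = 20  # samples per analysis window (placeholder)

    def synthetic_startup(broken):
        """Toy stand-in for a recorded start-up current (200 samples)."""
        t = np.linspace(0, 1, 200)
        signal = np.exp(-3 * t) * np.sin(40 * np.pi * t)
        if broken:
            signal += 0.3 * np.sin(8 * np.pi * t)   # synthetic fault signature
        return signal + 0.05 * rng.normal(size=t.size)

    healthy = [synthetic_startup(False) for _ in range(30)]
    faulty = [synthetic_startup(True) for _ in range(30)]

    # Fit PCA on all windows of all training signals.
    all_windows = np.vstack([s.reshape(-1, WINDOW) for s in healthy + faulty])
    pca = PCA(n_components=3).fit(all_windows)

    def to_sequence(signal):
        """Observation sequence: PCA scores of consecutive signal windows."""
        return pca.transform(signal.reshape(-1, WINDOW))

    # One HMM per condition; classify a new signal by maximum log-likelihood.
    models = {}
    for label, signals in (("healthy", healthy), ("broken_bar", faulty)):
        seqs = [to_sequence(s) for s in signals]
        models[label] = GaussianHMM(n_components=3, n_iter=50, random_state=0).fit(
            np.concatenate(seqs), lengths=[len(s) for s in seqs])

    test_seq = to_sequence(synthetic_startup(True))
    print(max(models, key=lambda name: models[name].score(test_seq)))
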
41.
  • Gerdes, Mike (författare)
  • Decision trees and genetic algorithms for condition monitoring forecasting of aircraft air conditioning
  • 2013
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 40:12, s. 5021-5026
  • Tidskriftsartikel (refereegranskat)abstract
    • Unscheduled maintenance of aircraft can cause significant costs: the machine needs to be repaired before it can operate again. It is therefore desirable to have concepts and methods to prevent unscheduled maintenance. This paper proposes a method for forecasting the condition of an aircraft air-conditioning system based on observed past data. Forecasting is done point by point, by iterating the algorithm. The proposed method uses decision trees to find and learn patterns in past data and uses these patterns to select the best method for forecasting future data points. Forecasting a data point is based on selecting the best applicable approximation method; the selection is done by calculating different features/attributes of the time series and then evaluating the decision tree. A genetic algorithm is used to find the best feature set for the given problem, increasing the forecasting performance. The experiments show good forecasting ability even when the underlying function is disturbed by noise.
  •  
42.
  • Gil, David, et al. (författare)
  • Application of artificial neural networks in the diagnosis of urological dysfunctions
  • 2009
  • Ingår i: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 36:3, s. 5754-5760
  • Forskningsöversikt (refereegranskat)abstract
    • In this article, we evaluate the performance of several artificial neural network models as tools for supporting the medical diagnosis of urological dysfunctions. We develop two types of unsupervised and one supervised neural network. This scheme is meant to help urologists obtain a diagnosis for complex multi-variable diseases and to reduce painful and costly medical treatments, since such dysfunctions are difficult to diagnose. The clinical study has been carried out using medical registers of patients with urological dysfunctions. The proposal is able to distinguish and classify between ill and healthy patients. (C) 2008 Elsevier Ltd. All rights reserved.
  •  
43.
  • Gil, David, et al. (författare)
  • Predicting seminal quality with artificial intelligence methods
  • 2012
  • Ingår i: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 39:16, s. 12564-12573
  • Tidskriftsartikel (refereegranskat)abstract
    • Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology for decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in predicting seminal quality from data on environmental factors and lifestyle. To do so, we collect data through a normalized questionnaire from young healthy volunteers and then use the results of a semen analysis to assess the prediction accuracy of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines achieve the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast, decision trees provide a visual and illustrative approach that can compensate for the slightly lower accuracy obtained. In conclusion, artificial intelligence methods are a useful tool for predicting the seminal profile of an individual from environmental factors and life habits. Of the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient. (C) 2012 Elsevier Ltd. All rights reserved.
  •  
44.
  •  
45.
  • Gulisano, Vincenzo Massimiliano, 1984, et al. (författare)
  • STONE: A streaming DDoS defense framework
  • 2015
  • Ingår i: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 42:24, s. 9620-9633
  • Tidskriftsartikel (refereegranskat)abstract
    • Distributed Denial-of-Service (DDoS) attacks aim at rapidly exhausting the communication and computational power of a network target by flooding it with large volumes of malicious traffic. In order to be effective, a DDoS defense mechanism should detect and mitigate threats quickly, while allowing legitimate users access to the attack's target. Nevertheless, defense mechanisms proposed in the literature tend not to address detection and mitigation challenges jointly, but rather focus solely on the detection or the mitigation facet. At the same time, they usually overlook the limitations of centralized defense frameworks that, when deployed physically close to a possible target, become ineffective if DDoS attacks are able to saturate the target's incoming links. This paper presents STONE, a framework with expert system functionality that provides effective and joint DDoS detection and mitigation. STONE characterizes the regular network traffic of a service by aggregating it into common prefixes of IP addresses, and detects attacks when the aggregated traffic deviates from the regular one. Upon detection of an attack, STONE allows traffic from known sources to access the service while discarding suspicious traffic. STONE relies on the data streaming processing paradigm in order to characterize and detect anomalies in real time. We implemented STONE on top of StreamCloud, an elastic and parallel-distributed stream processing engine. The evaluation, conducted on real network traces, shows that STONE detects DDoS attacks rapidly, causes minimal degradation of legitimate traffic while mitigating a threat, and also exhibits a processing throughput that scales linearly with the number of nodes used to deploy and run it.
  •  
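
STONE, described above, characterizes traffic by aggregating source addresses into common IP prefixes. The sketch below shows such aggregation with Python's ipaddress module on a handful of made-up addresses; the /24 prefix length is an arbitrary choice, not STONE's actual aggregation scheme.

    import ipaddress
    from collections import Counter

    PREFIX_LEN = 24  # aggregate sources into /24 prefixes (arbitrary choice)

    # Source addresses of observed packets (toy sample).
    packets = ["203.0.113.7", "203.0.113.42", "203.0.113.9",
               "198.51.100.23", "198.51.100.77", "192.0.2.1"]

    def prefix_of(addr, length=PREFIX_LEN):
        """Map a source address to its enclosing network prefix."""
        return ipaddress.ip_network(f"{addr}/{length}", strict=False)

    traffic_per_prefix = Counter(prefix_of(a) for a in packets)
    for net, count in traffic_per_prefix.most_common():
        print(net, count)
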
46.
  • Hamedi, Hamidreza, et al. (författare)
  • Measuring Lane-changing Trajectories by Employing Context-based Modified Dynamic Time Warping
  • 2023
  • Ingår i: Expert Systems with Applications. - : Elsevier BV. - 0957-4174. ; 216
  • Tidskriftsartikel (refereegranskat)abstract
    • The spatial lane-changing (LC) behavior of vehicles in transportation systems should be analyzed in order to identify movement patterns from the similarities detected in their lane-changing trajectories. The trajectory of an LC vehicle is a function of its context. The present paper uses spatial footprints and external/internal contexts to contextualize a similarity measure for LC trajectories. Whereas previous work constrained the external context to the surrounding vehicles on the road, this study also investigates the contribution of solar radiation to lane-changing trajectory patterns. Similarities between multi-dimensional trajectories are determined with a context-based modified dynamic time warping (CMDTW) technique, which is analyzed on the Next Generation Simulation (NGSIM) dataset. A weighting framework for each dimension, with weights quantified by the AHP technique, makes it possible to measure the similarities between lane-changing trajectories. The results show that the lane-changing procedure depends not only on the conditions of the lane changer, but also on the solar radiation and the surrounding vehicles treated as external contexts. Additionally, by including different dimensions, both internal and external contexts, the similarity results for LC trajectories become more realistic, and the ability of the CMDTW algorithm to detect the trajectory with the maximum similarity is enhanced. Furthermore, the LC trajectories are clustered with the Fuzzy C-means (FCM) technique; evaluated with Cohen's kappa, the clustering achieves a Kappa score above 0.8, indicating excellent performance. Comparisons with commonly used similarity measurement techniques indicate that CMDTW is more accurate in detecting lane-changing trajectory patterns, and the suggested CMDTW method is therefore effective in identifying patterns of lane-changing trajectories.
  •  
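
The similarity measure above is built on dynamic time warping. The sketch below implements plain DTW between two univariate lane-change profiles; the context-dependent weighting of dimensions, which is the paper's contribution, is not reproduced, and the example trajectories are synthetic.

    import numpy as np

    def dtw_distance(a, b):
        """Classic dynamic-time-warping distance between two 1-D sequences,
        using absolute difference as the local cost."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # Two lateral-position profiles of lane changes, sampled at different rates.
    lc_1 = np.sin(np.linspace(0, np.pi, 50))
    lc_2 = np.sin(np.linspace(0, np.pi, 80)) + 0.05
    print("DTW distance:", round(dtw_distance(lc_1, lc_2), 3))
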
47.
  • Hilletofth, Per, et al. (författare)
  • Three novel fuzzy logic concepts applied to reshoring decision-making
  • 2019
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 126, s. 133-143
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper investigates the possibility of increasing the interpretability of fuzzy rules and reducing the complexity when designing fuzzy rules. To achieve this, three novel fuzzy logic concepts (i.e., relative linguistic labels, high-level rules and linguistic variable weights) were conceived and implemented in a fuzzy logic system for reshoring decision-making. The introduced concepts increase the interpretability of fuzzy rules and reduce the complexity when designing fuzzy rules while still providing accurate results.
  •  
48.
  • Hossain, Emam, et al. (författare)
  • Machine learning with Belief Rule-Based Expert Systems to predict stock price movements
  • 2022
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 206
  • Tidskriftsartikel (refereegranskat)abstract
    • Price prediction of financial assets has been a key interest for researchers over the decades. Numerous techniques to predict price movements have been developed over the years, but a model loses its credibility once a large number of traders start using the same technique. Therefore, traders are in continuous search of new and efficient prediction techniques. In this research, we propose a novel machine learning technique using technical analysis with a Belief Rule-Based Expert System (BRBES), incorporating the concept of Bollinger Bands to forecast the stock price over the next five days. A Bollinger Event is triggered when the closing price of the stock falls below the Lower Bollinger Band. The BRBES approach has never been applied to stock markets, despite its potential and the appetite of the financial markets for expert systems. We predict the price movement of the Swedish company TELIA as a proof of concept. The knowledge base of the initial BRBES is constructed by simulating the historical data, and the learning parameters are then optimized using MATLAB’s fmincon function. We evaluate the performance of the trained BRBES in terms of Accuracy, Area Under ROC Curve, Root Mean Squared Error, type I error, type II error,  value, and profit/loss ratio. We compare our proposed model against a similar rule-based technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to understand the significance of the improved rule base of BRBES. We also compare the performance against Support Vector Machines (SVM), one of the most popular machine learning techniques, and a simple heuristic model. Finally, the trained BRBES is compared against recent state-of-the-art deep learning approaches to show how competitive the performance of our proposed model is. The results show that the trained BRBES outperforms the non-trained BRBES, ANFIS, SVM, and the heuristic approaches, and performs better than or on par with the deep learning approaches. Thus BRBES shows clear potential for predicting financial asset price movements.
  •  
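
The entry above triggers a "Bollinger Event" when the closing price falls below the lower Bollinger Band. The pandas sketch below computes the bands and flags such events on a synthetic price series; the 20-day window and 2-standard-deviation width are the conventional defaults, not necessarily the paper's settings, and the series is random rather than the TELIA data.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Synthetic daily closing prices standing in for a real stock series.
    close = pd.Series(100 + rng.normal(0, 1, 250).cumsum(),
                      index=pd.date_range("2020-01-01", periods=250, freq="B"))

    window, n_std = 20, 2
    middle = close.rolling(window).mean()
    std = close.rolling(window).std()
    lower = middle - n_std * std
    upper = middle + n_std * std

    # A "Bollinger Event": the close drops below the lower band.
    events = close[close < lower]
    print(f"{len(events)} Bollinger Events, first few:\n{events.head()}")
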
49.
  • Hosseini, Ahmad, et al. (författare)
  • Connectivity reliability in uncertain networks with stability analysis
  • 2016
  • Ingår i: Expert systems with applications. - : Elsevier BV. - 0957-4174 .- 1873-6793. ; 57, s. 337-344
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper treats the fundamental problems of reliability and stability analysis in uncertain networks. We consider a collapsed, post-disaster traffic network composed of nodes (centers) and arcs (links), where the uncertain operationality or reliability of links is evaluated by domain experts. To ensure that relief materials and rescue vehicles arrive at the disaster areas in time, uncertainty theory, which requires neither a probability distribution nor a fuzzy membership function, is employed to formulate the problem of choosing the most reliable path (MRP). We then introduce the new problems of the α-most reliable path (α-MRP), which aims to minimize the pessimistic risk value of a path under a given confidence level α, and the very most reliable path (VMRP), where the objective is to maximize the confidence level of a path under a given threshold of pessimistic risk. Exploiting these concepts, we give the uncertainty distribution of the MRP in an uncertain traffic network. The objective of both α-MRP and VMRP is to determine a path that constitutes the least risky route for transportation from a designated source node to a designated sink node, but with different decision criteria. Furthermore, a methodology is proposed to tackle the stability analysis issue in the framework of uncertainty programming; specifically, we show how to compute the arcs' tolerances. Finally, we provide illustrative examples that show how our approaches work in realistic situations.
  •  
50.
  • Hosseini, Ahmad (författare)
  • Max-type reliability in uncertain post-disaster networks through the lens of sensitivity and stability analysis
  • 2024
  • Ingår i: Expert systems with applications. - : Elsevier. - 0957-4174 .- 1873-6793. ; 241
  • Tidskriftsartikel (refereegranskat)abstract
    • The functionality of infrastructures, particularly in densely populated areas, is greatly impacted by natural disasters, resulting in uncertain networks. Thus, it is important for crisis management professionals and computer-based systems for transportation networks (such as expert systems) to utilize trustworthy data and robust computational methodologies when addressing convoluted decision-making predicaments concerning the design of transportation networks and optimal routes. This study aims to evaluate the vulnerability of paths in post-disaster transportation networks, with the aim of facilitating rescue operations and ensuring the safe delivery of supplies to affected regions. To investigate the problem of links' tolerances in uncertain networks and the resiliency and reliability of paths, an uncertainty theory-based model that employs minmax optimization with a bottleneck objective function is used. The model addresses the uncertain maximum reliable paths problem, which takes into account uncertain risk variables associated with links. Rather than using conventional methods for calculating the deterministic tolerances of a single element in combinatorial optimization, this study introduces a generalization of stability analysis based on tolerances while the perturbations in a group of links are involved. The analysis defines set tolerances that specify the minimum and maximum values that a designated group of links could simultaneously fluctuate while maintaining the optimality of the max-type reliable paths. The study shows that set tolerances can be considered as well-defined and proposes computational methods to calculate or bound such quantities - which were previously unresearched and difficult to measure. The model and methods are demonstrated to be both theoretically and numerically efficient by applying them to four subnetworks from our case study. In conclusion, this study provides a comprehensive approach to addressing uncertainty in reliability problems in networks, with potential applications in various fields.
  •  