SwePub
Search the SwePub database

  Advanced search

Search results for "L773:0884 8173 OR L773:1098 111X"

Search: L773:0884 8173 OR L773:1098 111X

  • Results 1-15 of 15
1.
  • Al Falahi, Kanna, et al. (author)
  • Models of Influence in Online Social Networks
  • 2014
  • In: International Journal of Intelligent Systems. - USA : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 29:2, pp. 161-183
  • Journal article (peer-reviewed); abstract:
    • Online social networks gained their popularity from the relationships users can build with each other. These social ties play an important role in asserting users’ behaviors in a social network. For example, a user might purchase a product that his friend recently bought. Such a phenomenon is called social influence, which is used to study users’ behavior when the action of one user can affect the behavior of his neighbors in a social network. Social influence is increasingly investigated nowadays, as it can help spread messages widely, particularly in the context of marketing, to rapidly promote products and services based on friends’ behavior in the network. This wide interest in social influence raises the need to develop models to evaluate the rate of social influence. In this paper, we discuss metrics used to measure influence probabilities. Then, we reveal means to maximize social influence by identifying and using the most influential users in a social network. Along with these contributions, we also survey existing social influence models and classify them into an original categorization framework. Then, based on our proposed metrics, we show the results of an experimental evaluation comparing the influence power of some of the surveyed salient models used to maximize social influence.
  •  
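Influence surveys of this kind typically cover propagation models such as independent cascade (IC). A minimal sketch of one IC simulation run; the toy graph and the uniform per-edge activation probability are assumptions for illustration, not taken from the article:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """One run of the independent cascade model: every newly activated
    node gets a single chance to activate each inactive neighbour."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for neigh in graph.get(node, []):
                if neigh not in active and rng.random() < p:
                    active.add(neigh)
                    nxt.append(neigh)
        frontier = nxt
    return active

# hypothetical toy network; averaging many runs estimates expected spread
g = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
spread = independent_cascade(g, seeds=[1], p=0.99, rng=random.Random(1))
```

In influence maximization, a function like this is evaluated repeatedly to pick the seed set with the largest expected spread.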
2.
  • Driankov, Dimiter, 1952- (author)
  • Towards a many-valued logic of quantified belief: The information lattice
  • 1991
  • In: International Journal of Intelligent Systems. - : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 6:2, pp. 135-166
  • Journal article (peer-reviewed); abstract:
    • In a previous article we introduced extended logical operators, based on the Dubois family of T-norms and their dual T-conorms, to induce a semantics for a language involving and, or, and negation. Thus, given these logical operators and an arbitrary set-up S (a mapping from atomic formulas into a set of truth-values), we extended S to a mapping of all formulas into a set of truth-values defined as belief/disbelief pairs. Then, using a particular partial order between belief/disbelief pairs to define entailment, we were able to derive a many-valued variant of so-called relevance logic. Here we introduce the notion of the so-called information lattice, built upon another type of partial order between belief/disbelief pairs. Furthermore, we introduce specific meet and join operations and use them to provide answers to three fundamental questions: how does the reasoning machine represent belief and/or disbelief in the validity of the constituents of a complex formula when it is supplied with belief and/or disbelief in the validity of this complex formula as a whole; how does it determine the amount of belief and/or disbelief to be assigned to complex formulas in an epistemic state, that is, a collection of set-ups; and finally, how does it change its present belief and/or disbelief in the validity of formulas already in its database when provided with an input bringing in new belief and/or disbelief in the validity of these formulas.
  •  
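Information orderings on belief/disbelief pairs are standard in bilattice-style logics (cf. Belnap, Ginsberg). A minimal sketch of meet and join under the usual knowledge order, not claiming to reproduce the article's exact operators:

```python
# A belief/disbelief pair (b, d) assigns degrees in [0, 1] to a formula.
# Knowledge (information) order: (b1, d1) <= (b2, d2) iff b1 <= b2 and d1 <= d2;
# meet and join are then componentwise min and max.

def k_meet(p, q):
    """Consensus: keep only the information both sources agree on."""
    return (min(p[0], q[0]), min(p[1], q[1]))

def k_join(p, q):
    """Accept all information contributed by either source."""
    return (max(p[0], q[0]), max(p[1], q[1]))

unknown = (0.0, 0.0)        # bottom: no belief, no disbelief
contradiction = (1.0, 1.0)  # top: full belief and full disbelief at once
```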
3.
  • Driankov, Dimiter, 1952- (author)
  • Uncertainty calculus with verbally defined belief-intervals
  • 1986
  • In: International Journal of Intelligent Systems. - : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 1:4, pp. 219-246
  • Journal article (peer-reviewed); abstract:
    • The intended purpose of the present article is twofold: first, introducing an interval-like representation of uncertainty that is an adequate summary of the following two items of information: a report on how strongly the validity of a proposition is supported by a body of evidence, and a report on how strongly the validity of its negation is supported. A representation of this type is called a belief-interval and is introduced as a subinterval of a certain verbal scale consisting of nine linguistic estimates expressing the amount of support provided for the validity of a proposition and/or its negation; each linguistic estimate is represented as a fuzzy number in the interval [0,1]. A belief-interval is bounded from below by an estimate indicating the so-called degree of support and from above by an estimate indicating the so-called degree of plausibility. The latter is defined as the difference between a fuzzy number representing the maximal degree of support that might be provided for a proposition in general and a fuzzy number expressing the degree of support provided for the validity of the negation of the proposition under consideration. The so-introduced degrees of support and plausibility of a proposition are subjective measurements provided by the expert on the basis of some negative and/or positive evidence available to him. Thus, these two notions do not have the same measure-based origins as do the set-theoretic measures of support and plausibility proposed by G. Shafer, neither do they coincide with the possibility and necessity measures proposed by L. Zadeh. The main difference is that in our case the degree of plausibility might be, in cases of contradictory beliefs, less than its corresponding degree of support.
Three types of belief-intervals are identified on the basis of the different amounts of support that might be provided for the validity of a proposition and/or its negation, namely balanced, unbalanced, and contradictory belief-intervals. The second objective of this article is to propose a calculus for the belief-intervals by extending the usual logical connectives and, or, negation, and implies. Thus, conjunctive and disjunctive operators are introduced using the Dubois parametrized family of T-norms and their dual T-conorms. The parameter Q characterizing the latter is interpreted as a measure of the strength of these connectives, and further interpretation of the notion of strength is given for the cases of independent and dependent evidence. This leads to the introduction of specific conjunctive and disjunctive operators to be used separately in each of the latter two cases. A negation operator is proposed with the main purpose of determining the belief-interval to be assigned to the negation of a particular proposition, given the belief-interval of the proposition alone. A so-called aggregation operator is introduced with the purpose of aggregating multiple belief-intervals assigned to one and the same proposition into a total belief-interval for this particular proposition. Detachment operators are proposed for determining the belief-interval of a conclusion given the belief-interval of the premise and the one representing the amount of belief committed to the validity of the inference rule itself. Two different detachment operators are constructed for use in cases when: (1) the presence of the negation of the premise suggests the presence of the negation of the conclusion, and (2) the presence of the negation of the premise does not tell anything at all with respect to the validity of the conclusion to be drawn.
  •  
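The Dubois parametrized T-norm family interpolates between min and product. A sketch of the T-norm and its De Morgan-dual T-conorm; the zero-argument guard and the parameter name `alpha` are implementation choices here:

```python
def t_dubois(a, b, alpha):
    """Dubois parametrized T-norm: reduces to min at alpha = 0 and to the
    product at alpha = 1."""
    m = max(a, b, alpha)
    return a * b / m if m > 0 else 0.0

def s_dubois(a, b, alpha):
    """Dual T-conorm, obtained by De Morgan duality with negation 1 - x."""
    return 1.0 - t_dubois(1.0 - a, 1.0 - b, alpha)
```

In a belief-interval calculus of the kind described above, a T-norm like this would conjoin degrees of support while the dual T-conorm disjoins them.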
4.
  •  
5.
  • Pavlenko, Tatjana, et al. (author)
  • Credit Risk Modeling Using Bayesian Networks
  • 2010
  • In: International Journal of Intelligent Systems. - : Hindawi Limited. - 0884-8173 .- 1098-111X. ; 25:4, pp. 326-344
  • Journal article (peer-reviewed); abstract:
    • The main goal of this research is to demonstrate how probabilistic graphs may be used for modeling and assessment of credit concentration risk. The destructive power of credit concentrations essentially depends on the amount of correlation among borrowers. However, correlation among borrower companies and concentration of credit risk exposures have been difficult for the banking industry to measure objectively, as they are riddled with uncertainty. As a result, banks do not manage to make a quantitative link to the correlations driving risk and fail to prevent concentrations from accumulating. In this paper, we argue that Bayesian networks provide an attractive solution to these problems, and we show how to apply them in representing, quantifying and managing the uncertain knowledge in concentrations of credit risk exposures. We suggest stepwise Bayesian network model building and show how to incorporate expert-based prior beliefs on the risk exposure of a group of related borrowers and then update these beliefs through the whole model with new information. We then explore a specific graph structure, a tree-augmented Bayesian network, and show that this model provides a better understanding of the risk accumulating due to business links between borrowers. We also present two strategies of model assessment that exploit the measure of mutual information and show that the constructed Bayesian network is a reliable model that can be implemented to identify and control the threat from concentration of credit exposures. Finally, we demonstrate that the suggested tree-augmented Bayesian network is also suitable for stress-testing analysis; in particular, it can provide estimates of the posterior risk of losses related to unfavorable changes in the financial conditions of a group of related borrowers.
  •  
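The prior-to-posterior updating described above can be illustrated, far more simply than the paper's tree-augmented network, with a two-borrower fragment; all probabilities below are hypothetical:

```python
# Hypothetical expert-based numbers for two business-linked borrowers.
p_default_A = 0.05       # prior P(A defaults)
p_B_given_A = 0.40       # P(B defaults | A defaults) - strong business link
p_B_given_not_A = 0.02   # P(B defaults | A stays sound)

# Marginal risk for B, by the law of total probability:
p_B = p_B_given_A * p_default_A + p_B_given_not_A * (1 - p_default_A)

# New evidence arrives: B has defaulted. Propagate back to A via Bayes' rule:
p_A_given_B = p_B_given_A * p_default_A / p_B
```

With these numbers the observed default of B lifts the posterior risk of A from 5% to roughly 51%, which is the kind of concentration effect the model is meant to expose.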
6.
  • Qadri, Syed Furqan, et al. (author)
  • CT-based automatic spine segmentation using patch-based deep learning
  • 2023
  • In: International Journal of Intelligent Systems. - : Hindawi Publishing Corporation. - 0884-8173 .- 1098-111X. ; 2023
  • Journal article (peer-reviewed); abstract:
    • CT vertebral segmentation plays an essential role in various clinical applications, such as computer-assisted surgical interventions, assessment of spinal abnormalities, and vertebral compression fractures. Automatic CT vertebral segmentation is challenging due to the overlapping shadows of thoracoabdominal structures such as the lungs, bony structures such as the ribs, and other issues such as ambiguous object borders, complicated spine architecture, patient variability, and fluctuations in image contrast. Deep learning is an emerging technique for disease diagnosis in the medical field. This study proposes a patch-based deep learning approach to extract discriminative features from unlabeled data using a stacked sparse autoencoder (SSAE). 2D slices from a CT volume are divided into overlapping patches that are fed into the model for training. A random under-sampling (RUS) module is applied to balance the training data by selecting a subset of the majority class. The SSAE uses pixel intensities alone to learn high-level features that distinguish image patches. Each image is subjected to a sliding-window operation to express image patches using autoencoder high-level features, which are then fed into a sigmoid layer to classify whether each patch is a vertebra or not. We validate our approach on three diverse publicly available datasets: VerSe, CSI-Seg, and the Lumbar CT dataset. Our proposed method outperformed other models after configuration optimization, achieving 89.9% precision, 90.2% recall, 98.9% accuracy, 90.4% F-score, 82.6% intersection over union (IoU), and 90.2% Dice coefficient (DC). The results of this study demonstrate that our model performs consistently under a variety of validation strategies and is flexible, fast, and generalizable, making it suited for clinical application.
  •  
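The overlapping-patch extraction step can be sketched as follows; the patch size and stride are illustrative assumptions (the abstract does not give the exact values), and the SSAE itself is omitted:

```python
import numpy as np

def extract_patches(slice2d, patch=32, stride=16):
    """Split a 2-D CT slice into overlapping square patches via a
    sliding window; each patch would later be classified vertebra/not."""
    h, w = slice2d.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(slice2d[y:y + patch, x:x + patch])
    return np.stack(out)

patches = extract_patches(np.zeros((128, 128)), patch=32, stride=16)
# each patch could then be flattened and fed to the stacked sparse autoencoder
```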
7.
  • Sonntag, Dag, et al. (author)
  • Approximate Counting of Graphical Models Via MCMC Revisited
  • 2015
  • In: International Journal of Intelligent Systems. - : Wiley-Blackwell. - 0884-8173 .- 1098-111X. ; 30:3, pp. 384-420
  • Journal article (peer-reviewed); abstract:
    • We apply MCMC sampling to approximately calculate some quantities and discuss their implications for learning directed acyclic graphs (DAGs) from data. Specifically, we calculate the approximate ratio of essential graphs (EGs) to DAGs for up to 31 nodes. Our ratios suggest that the average Markov equivalence class is small. We show that a large majority of the classes seem to have a size close to the average size. This suggests that one should not expect more than a moderate gain in efficiency when searching the space of EGs instead of the space of DAGs. We also calculate the approximate ratio of connected EGs to connected DAGs, of connected EGs to EGs, and of connected DAGs to DAGs. These new ratios are interesting because, as we will see, they suggest that some conjectures that appear in the literature do not hold. Furthermore, we prove that the latter ratio is asymptotically 1. Finally, we calculate the approximate ratio of EGs to largest chain graphs for up to 25 nodes. Our ratios suggest that Lauritzen-Wermuth-Frydenberg chain graphs are considerably more expressive than DAGs. We also report similar approximate ratios and conclusions for multivariate regression chain graphs.
  •  
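MCMC is needed because these spaces explode combinatorially, but for tiny n the counts behind such ratios can be verified by brute-force enumeration. For 3 labeled nodes there are 25 DAGs falling into 11 Markov equivalence classes, so the average class holds about 2.3 DAGs:

```python
from itertools import permutations, product

def is_acyclic(edges, n):
    """Brute force: a digraph is acyclic iff some ordering of its n nodes
    is a topological order (fine for tiny n)."""
    return any(all(order.index(u) < order.index(v) for u, v in edges)
               for order in permutations(range(n)))

n = 3
pairs = [(u, v) for u in range(n) for v in range(n) if u != v]  # 6 possible arcs
dags = sum(
    is_acyclic([e for e, keep in zip(pairs, bits) if keep], n)
    for bits in product([0, 1], repeat=len(pairs))
)
# dags == 25 labeled DAGs on 3 nodes; 11 Markov equivalence classes exist,
# i.e. roughly 2.3 DAGs per class on average
```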
8.
  • Tillander, Annika, 1971- (author)
  • Effect of Data Discretization on the Classification Accuracy in a High-Dimensional Framework
  • 2012
  • In: International Journal of Intelligent Systems. - : Hindawi Limited. - 0884-8173 .- 1098-111X. ; 27:4, pp. 355-374
  • Journal article (peer-reviewed); abstract:
    • We investigate discretization of continuous variables for classification problems in a high-dimensional framework. As the goal of classification is to correctly predict the class membership of an observation, we suggest a discretization method that optimizes the discretization procedure using the misclassification probability as a measure of the classification accuracy. Our method is compared to several other discretization methods as well as results for continuous data. To compare performance, we consider three supervised classification methods, and to capture the effect of high dimensionality we investigate a number of feature variables for a fixed number of observations. Since discretization is a data transformation procedure, we also investigate how the dependence structure is affected by it. Our method performs well, and lower misclassification can be obtained in a high-dimensional framework for both simulated and real data if the continuous feature variables are first discretized. The dependence structure is well maintained for some discretization methods.
  •  
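As a baseline for comparison (the article's own method instead tunes cut points against the misclassification probability), generic equal-frequency discretization of one feature looks like this:

```python
import numpy as np

def equal_frequency_discretize(x, bins=4):
    """Map a continuous feature to integer bin codes so that each bin
    receives roughly the same number of observations."""
    cuts = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.searchsorted(cuts, x, side="right")

x = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.05, 0.55, 0.7])
codes = equal_frequency_discretize(x, bins=4)
# eight values, four bins -> two observations per bin
```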
9.
  • Torra, Vicenç, et al. (author)
  • Synthetic generation of spatial graphs
  • 2018
  • In: International Journal of Intelligent Systems. - : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 33:12, pp. 2364-2378
  • Journal article (peer-reviewed); abstract:
    • Graphs can be used to model many different types of interaction networks, for example, online social networks or animal transport networks. Several algorithms have thus been introduced to build graphs according to some predefined conditions. In this paper, we present an algorithm that generates spatial graphs with a given degree sequence. In spatial graphs, nodes are located in a space equipped with a metric. Our goal is to define a graph in such a way that the nodes and edges are positioned according to an underlying metric. More particularly, we have constructed a greedy algorithm that generates nodes proportional to an underlying probability distribution from the spatial structure, and then generates edges inversely proportional to the Euclidean distance between nodes. The algorithm first generates a graph that can be a multigraph, and then corrects multiedges. Our motivation is in data privacy for social networks, where a key problem is the ability to build synthetic graphs. These graphs need to satisfy a set of required properties (e.g., the degrees of the nodes) but also be realistic, and thus, nodes (individuals) should be located according to a spatial structure and connections should be added taking into account nearness.
  •  
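A much-simplified sketch of the idea: place nodes in a metric space and favour short-distance connections. Unlike the paper's algorithm, this version enforces no prescribed degree sequence and sidesteps multi-edges by linking each node to its nearest neighbours instead of sampling edges with distance-inverse probabilities:

```python
import math
import random

def spatial_graph(n, k, rng):
    """Place n nodes uniformly in the unit square and link each node to
    its k nearest neighbours, so edge formation favours nearness."""
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        nearest = sorted((j for j in range(n) if j != i),
                         key=lambda j: math.dist(pos[i], pos[j]))[:k]
        for j in nearest:
            edges.add((min(i, j), max(i, j)))  # undirected, no multi-edges
    return pos, edges

pos, edges = spatial_graph(20, 3, random.Random(7))
```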
10.
  • Vachkov, Gancho, et al. (author)
  • Detection of Deviation in Performance of Battery Cells by Data Compression and Similarity Analysis
  • 2014
  • In: International Journal of Intelligent Systems. - Hoboken, NJ : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 29:3, pp. 207-222
  • Journal article (peer-reviewed); abstract:
    • The battery cells are an important part of electric and hybrid vehicles, and their deterioration due to aging or malfunction directly affects the life cycle and performance of the whole battery system. Therefore, an early detection of deviation in performance of the battery cells is an important task, and its correct solution could significantly improve the whole vehicle performance. This paper presents a computational strategy for the detection of deviation of battery cells due to aging or malfunction. The detection is based on periodically processing a predetermined number of data collected in data blocks that are obtained during the real operation of the vehicle. The first step is data compression, when the original large amount of data is reduced to a smaller number of cluster centers. This is done by a newly proposed sequential clustering algorithm that arranges the clusters in decreasing order of their volumes. The next step is using a fuzzy inference procedure for weighted approximation of the cluster centers to create one-dimensional models for each battery cell that represent the voltage–current relationship. This creates an equal basis for the further comparison of the battery cells. Finally, the detection of the deviated battery cells is treated as a similarity-analysis problem, in which the pair distances between all battery cells are estimated by analyzing the estimations for voltage from the respective fuzzy models. All these three steps of the computational procedure are explained in the paper and applied to real experimental data for the detection of deviation of five battery cells. Discussions and suggestions are made for a practical application aimed at designing a monitoring system for the detection of deviations. © 2013 Wiley Periodicals, Inc.
  •  
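The comparison idea, evaluating each cell's voltage-current model on a shared grid and inspecting pairwise distances, can be sketched with a plain linear fit standing in for the paper's clustering and fuzzy-approximation steps; the data below are synthetic:

```python
import numpy as np

def deviation_scores(cells, grid):
    """Fit a simple per-cell voltage-current model, evaluate all models on
    a common current grid, and score each cell by its average distance to
    the other cells' curves (a stand-in for the paper's fuzzy models)."""
    curves = []
    for current, voltage in cells:
        slope, intercept = np.polyfit(current, voltage, 1)
        curves.append(slope * grid + intercept)
    curves = np.array(curves)
    dists = np.linalg.norm(curves[:, None, :] - curves[None, :, :], axis=2)
    return dists.mean(axis=1)

# synthetic data: cell 2 sags noticeably more per ampere than its peers
i = np.linspace(0, 10, 25)
cells = [(i, 4.0 - 0.020 * i), (i, 4.0 - 0.021 * i), (i, 4.0 - 0.050 * i)]
scores = deviation_scores(cells, grid=i)
# the deviated cell shows the largest average distance to the rest
```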
11.
  • Xiong, Ning, et al. (author)
  • Agent negotiation of target distribution enhancing system survivability
  • 2007
  • In: International Journal of Intelligent Systems. - : Hindawi Limited. - 0884-8173 .- 1098-111X. ; 22:12, pp. 1251-1269
  • Journal article (peer-reviewed); abstract:
    • This article proposes an agent negotiation model for target distribution across a set of geographically dispersed sensors. The key idea is to consider sensors as autonomous agents that negotiate over the division of tasks among them for obtaining better payoffs. The negotiation strategies for agents are established based upon the concept of subgame perfect equilibrium from game theory. Using such negotiation leads not only to superior measuring performance from a global perspective but also to possibly balanced allocations of tasks to sensors, benefiting system robustness and survivability. A simulation test and results are given to demonstrate the ability of our approach to improve system security while keeping overall measuring performance near optimal.
  •  
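Subgame perfect equilibrium, the solution concept the negotiation strategies build on, is easiest to see in textbook alternating-offers bargaining. This is an illustration of the concept only, not the paper's sensor model:

```python
def rubinstein_split(d1, d2):
    """First proposer's subgame perfect equilibrium share in Rubinstein's
    alternating-offers bargaining, with discount factors d1, d2 in (0, 1).
    In equilibrium, agreement is immediate."""
    return (1 - d2) / (1 - d1 * d2)

share = rubinstein_split(0.9, 0.9)  # patient, symmetric players
# as both discount factors approach 1, the split approaches 50/50
```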
12.
  •  
13.
  • Zhang, Yingfeng, et al. (author)
  • Game theory based real-time shop floor scheduling strategy and method for cloud manufacturing
  • 2017
  • In: International Journal of Intelligent Systems. - : John Wiley & Sons. - 0884-8173 .- 1098-111X. ; 32:4, pp. 437-463
  • Journal article (peer-reviewed); abstract:
    • With the rapid advancement and widespread application of information and sensor technologies on the manufacturing shop floor, the typical challenges cloud manufacturing faces are the lack of real-time, accurate, and value-added manufacturing information and of efficient shop floor scheduling strategies and methods based on real-time data. To achieve real-time data-driven optimization decisions, a dynamic optimization model for flexible job shop scheduling based on game theory is put forward to provide a new real-time scheduling strategy and method. In contrast to the traditional scheduling strategy, each machine is an active entity that requests processing tasks. The processing tasks are then assigned to the optimal machines according to their real-time status by using game theory. The key technologies, such as the construction of the game-theoretic mathematical model, the Nash equilibrium solution, and the optimization strategy for process tasks, are designed and developed to implement the dynamic optimization model. A case study is presented to demonstrate the efficiency of the proposed strategy and method, and real-time scheduling for four kinds of exceptions is also discussed.
  •  
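The machines-as-players idea can be caricatured as a load-balancing game solved by best-response dynamics, which converges to a pure Nash equilibrium in this class of games. Identical machines and load as the only status signal are simplifications not taken from the paper:

```python
def best_response_schedule(tasks, machines):
    """Best-response dynamics: each task (player) repeatedly moves to the
    machine minimizing its completion time until no task can strictly
    improve, i.e. a pure Nash equilibrium of the load-balancing game."""
    assign = {t: t % machines for t in range(len(tasks))}
    load = [0.0] * machines
    for t, m in assign.items():
        load[m] += tasks[t]
    improved = True
    while improved:
        improved = False
        for t in range(len(tasks)):
            cur = assign[t]
            best = min(range(machines),
                       key=lambda m: load[m] + (0 if m == cur else tasks[t]))
            if best != cur and load[best] + tasks[t] < load[cur]:
                load[cur] -= tasks[t]
                load[best] += tasks[t]
                assign[t] = best
                improved = True
    return assign, max(load)

assignment, makespan = best_response_schedule([4, 3, 2], machines=2)
```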
14.
  • Driankov, Dimiter, 1952- (author)
  • Inference with consistent probabilities in expert systems
  • 1989
  • In: International Journal of Intelligent Systems. - : John Wiley & Sons. - 0884-8173. ; 4:1, pp. 1-21
  • Journal article (peer-reviewed); abstract:
    • The objective of the present article is twofold: first, to provide ways for eliciting consistent a priori and conditional probabilities for a set of events representing pieces of evidence and hypotheses in the context of a rule-based expert system. Then an algorithm is proposed which uses the least possible number of a priori and conditional probabilities as its input and which computes the lower and upper bounds for higher-order conditional and joint probabilities, so that these are consistent with the input probabilities provided. In the case when inconsistent lower and upper bounds are obtained, it is suggested how they can be turned into consistent ones by changing the values of only those input probabilities which are directly represented in the higher-order probability under consideration. Secondly, a number of typical cases with respect to the problems of aggregation and propagation of uncertainty in expert systems are considered. It is shown how these can be treated by using higher-order joint probabilities. For this purpose no global assumptions of independence of evidence and of mutual exclusiveness of hypotheses are required, since the presence of independent and/or dependent pieces of evidence, as well as the presence of mutually exclusive hypotheses, is explicitly encoded in the input probabilities, and thus such a presence is automatically detected by the algorithm when computing higher-order joint probabilities.
  •  
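The simplest instance of such lower/upper bounds on a joint probability is the two-event case: the classical Fréchet bounds, which need no independence assumption. The article's algorithm handles conditionals and more events; this only illustrates the idea:

```python
def frechet_bounds(p_a, p_b):
    """Consistent lower and upper bounds for P(A and B) given only the
    marginals P(A) and P(B); any value in between is achievable."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower, upper

# e.g. P(A) = 0.7 and P(B) = 0.6 force 0.3 <= P(A and B) <= 0.6;
# independence would put it at 0.42, inside the interval
```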
15.
  • Helmersson-Karlqvist, Johanna, et al. (author)
  • The Roche Immunoturbidimetric Albumin Method on Cobas c 501 Gives Higher Values Than the Abbott and Roche BCP Methods When Analyzing Patient Plasma Samples
  • 2016
  • In: Journal of clinical laboratory analysis (Print). - : Wiley. - 0887-8013 .- 1098-2825. ; 30:5, pp. 677-681
  • Journal article (peer-reviewed); abstract:
    • BACKGROUND: Serum/plasma albumin is an important and widely used laboratory marker, and it is important that we measure albumin correctly, without bias. We had indications that the immunoturbidimetric method on Cobas c 501 and the bromocresol purple (BCP) method on Architect 16000 differed, so we decided to study these methods more closely. METHOD: A total of 1,951 patient requests with albumin measured with both the Architect BCP and Cobas immunoturbidimetric methods were extracted from the laboratory system. A comparison with fresh plasma samples was also performed that included immunoturbidimetric and BCP methods on Cobas c 501 and analysis of the international protein calibrator ERM-DA470k/IFCC. RESULTS: The median difference between the Abbott BCP and Roche immunoturbidimetric methods was 3.3 g/l, and the Roche method overestimated ERM-DA470k/IFCC by 2.2 g/l. The Roche immunoturbidimetric method gave higher values than the Roche BCP method: y = 1.111x - 0.739, R² = 0.971. CONCLUSION: The Roche immunoturbidimetric albumin method gives clearly higher values than the Abbott and Roche BCP methods when analyzing fresh patient samples. The differences between the two methods were similar at normal and low albumin levels.
  •  
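Using the regression reported in the abstract (y = 1.111x - 0.739, with y the Roche immunoturbidimetric and x the Roche BCP value, in g/l), one can estimate a BCP-equivalent result from an immunoturbidimetric one. Inverting the equation this way is our illustration, not a conversion recommended by the authors:

```python
def immunoturbidimetric_to_bcp(alb_it_gl):
    """Estimate the Roche BCP-equivalent albumin value (g/l) from a Roche
    immunoturbidimetric result by inverting y = 1.111x - 0.739."""
    return (alb_it_gl + 0.739) / 1.111

bcp_equiv = immunoturbidimetric_to_bcp(40.0)
# around 36.7 g/l, consistent with the ~3.3 g/l median difference reported
```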
Type of publication
journal article (15)
Type of content
peer-reviewed (13)
other academic/artistic (2)
Author/editor
Driankov, Dimiter, 1 ... (3)
Xiong, Ning (2)
Christensen, Henrik (1)
Larsson, Anders (1)
Wang, Jin (1)
Liu, Sichao (1)
Svensson, Magnus (1)
Torra, Vicenç (1)
Atif, Yacine, 1967- (1)
Wang, Lipo (1)
Svensson, Per (1)
Pavlenko, Tatjana (1)
Al Falahi, Kanna (1)
Abraham, Ajith (1)
Peña, Jose M. (1)
Jonsson, Annie (1)
Havelka, Aleksandra ... (1)
Flodin, Mats (1)
Byttner, Stefan, 197 ... (1)
Vachkov, Gancho (1)
Helmersson-Karlqvist ... (1)
Li, HX (1)
Tillander, Annika, 1 ... (1)
Qian, Cheng (1)
Wang, Haiying (1)
Zhang, Yingfeng (1)
Xu, Xiao Yan (1)
Li, Yongmin (1)
Zeng, WY (1)
Salas, Julián (1)
Chernyak, Oleksandr (1)
Sonntag, Dag (1)
Gomez-Olmedo, Manuel (1)
Qadri, Syed Furqan (1)
Lin, Hongxiang (1)
Shen, Linlin (1)
Ahmad, Mubashir (1)
Qadri, Salman (1)
Khan, Salabat (1)
Khan, Maqbool (1)
Zareen, Syeda Shamai ... (1)
Akbar, Muhammad Azee ... (1)
Bin Heyat, Md Belal (1)
Qamar, Saqib (1)
Navarro‐Arribas, Gui ... (1)
Higher education institution
Kungliga Tekniska Högskolan (3)
Örebro universitet (3)
Stockholms universitet (2)
Mälardalens universitet (2)
Högskolan i Skövde (2)
Umeå universitet (1)
Uppsala universitet (1)
Högskolan i Halmstad (1)
Linköpings universitet (1)
Karolinska Institutet (1)
Language
English (15)
Research subject (UKÄ/SCB)
Natural sciences (7)
Engineering and technology (2)
Medicine and health sciences (1)

Year
