SwePub


Hit list for search "L773:2162 237X"


  • Results 1-28 of 28
1.
  • Agback, Tatiana (author)
  • A Sparse Model-Inspired Deep Thresholding Network for Exponential Signal Reconstruction--Application in Fast Biological Spectroscopy
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X .- 2162-2388. ; 34, pp. 7578-7592
  • Journal article (peer-reviewed), abstract:
    • Nonuniform sampling (NUS) is a powerful approach to enable fast acquisition but requires sophisticated reconstruction algorithms. Faithful reconstruction from partially sampled exponentials is highly desirable in general signal processing and many applications. Deep learning (DL) has shown astonishing potential in this field, but many existing problems, such as a lack of robustness and explainability, greatly limit its applications. In this work, by combining the merits of the sparse model-based optimization method and data-driven DL, we propose a DL architecture for spectra reconstruction from undersampled data, called MoDern. It follows the iterative reconstruction in solving a sparse model to build the neural network, and we elaborately design a learnable soft-thresholding to adaptively eliminate the spectrum artifacts introduced by undersampling. Extensive results on both synthetic and biological data show that MoDern enables more robust, high-fidelity, and ultrafast reconstruction than the state-of-the-art methods. Remarkably, MoDern has a small number of network parameters and is trained solely on synthetic data, yet generalizes well to biological data in various scenarios. Furthermore, we extend it to an open-access and easy-to-use cloud computing platform (XCloud-MoDern), contributing a promising strategy for the further development of biological applications.
  •  
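The learnable soft-thresholding the abstract refers to builds on the standard soft-thresholding operator from sparse optimization. A minimal sketch (not the authors' MoDern code; the function name and the plain threshold argument are illustrative):

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding: sign(x) * max(|x| - lam, 0).
    Entries with |x| <= lam are zeroed; larger entries shrink toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Small entries (candidate undersampling artifacts) are suppressed,
# while strong spectral peaks survive, merely shrunk by lam.
x = np.array([-2.0, -0.3, 0.0, 0.5, 3.0])
denoised = soft_threshold(x, 1.0)
```

In a network such as MoDern, the threshold would be a trainable parameter rather than a constant, letting the model learn how aggressively to suppress artifacts at each iteration.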
2.
  • Akin, Erdal (author)
  • Deep Reinforcement Learning-Based Multirestricted Dynamic-Request Transportation Framework
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; , pp. 1-11
  • Journal article (peer-reviewed), abstract:
    • Unmanned aerial vehicles (UAVs) are used in many areas, and their usage is increasing constantly, so their popularity retains its importance in the technology world. As technology develops, living standards and surroundings should improve along with it. This study is motivated by the possibility of timely delivery of urgent medical requests in emergency situations. Using UAVs to deliver urgent medical requests can be very effective due to their flexible maneuverability and low cost. However, off-the-shelf UAVs suffer from limited payload capacity and battery constraints. In addition, urgent requests may arrive at uncertain times, and delivering them within a short time may be crucial. To address this issue, we propose a novel framework that accounts for the limitations of the UAVs and for dynamically requested packages. These previously unknown packages have source–destination pairs and delivery time intervals. Furthermore, we utilize the deep reinforcement learning (DRL) algorithms deep Q-network (DQN), proximal policy optimization (PPO), and advantage actor–critic (A2C) to cope with this unknown environment and its requests. Comprehensive experimental results demonstrate that the PPO algorithm trains faster and more stably than the other DRL algorithms in two different environmental setups. We also implemented an extended version of a brute-force (BF) algorithm, assuming that all requests and the environment are known in advance. The PPO algorithm's success rate comes very close to that of the BF algorithm.
  •  
3.
  • Bjurgert, Johan, et al. (author)
  • On Adaptive Boosting for System Identification
  • 2018
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 29:9, pp. 4510-4514
  • Journal article (peer-reviewed), abstract:
    • In the field of machine learning, the algorithm Adaptive Boosting has been successfully applied to a wide range of regression and classification problems. However, to the best of the authors' knowledge, the use of this algorithm to estimate dynamical systems has not yet been explored. In this brief, we explore the connection between Adaptive Boosting and system identification and give examples of an identification method that makes use of this connection. We prove that, for an output-error model structure and under reasonable assumptions, the resulting estimate converges to the true underlying system in the large-sample limit, and we derive a bound on the model mismatch for the noise-free case.
  •  
4.
  • Dinh, Canh T., et al. (author)
  • A New Look and Convergence Rate of Federated Multitask Learning With Laplacian Regularization
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388.
  • Journal article (peer-reviewed), abstract:
    • Non-independent and identically distributed (non-IID) data distribution among clients is considered the key factor that degrades the performance of federated learning (FL). Several approaches to handling non-IID data, such as personalized FL and federated multitask learning (FMTL), are of great interest to research communities. In this work, we first formulate the FMTL problem using Laplacian regularization to explicitly leverage the relationships among the clients' models for multitask learning. Then, we introduce a new view of the FMTL problem, which, for the first time, shows that the formulated FMTL problem can be used for conventional FL and personalized FL. We also propose two algorithms, FedU and decentralized FedU (dFedU), to solve the formulated FMTL problem in communication-centralized and decentralized schemes, respectively. Theoretically, we prove that the convergence rates of both algorithms achieve linear speedup for strongly convex objectives and sublinear speedup of order 1/2 for nonconvex objectives. Experimentally, we show that our algorithms outperform the conventional algorithms FedAvg, FedProx, SCAFFOLD, and AFL in FL settings, MOCHA in FMTL settings, as well as pFedMe and Per-FedAvg in personalized FL settings.
  •  
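A Laplacian-regularized multitask objective of the kind the abstract describes is commonly written in the following generic form (a sketch; the symbols $F_k$, $w_k$, $a_{kl}$, and $\eta$ are illustrative and not necessarily the paper's notation):

$$
\min_{w_1,\dots,w_N}\; \sum_{k=1}^{N} F_k(w_k) \;+\; \frac{\eta}{2} \sum_{k=1}^{N}\sum_{l=1}^{N} a_{kl}\,\lVert w_k - w_l \rVert^2 ,
$$

where $F_k$ is client $k$'s local loss, $w_k$ its model, $a_{kl} \ge 0$ encodes the strength of the relationship between the models of clients $k$ and $l$, and $\eta$ weights the graph-Laplacian penalty. Tuning $\eta$ and the graph weights is what lets one formulation interpolate between fully personalized models and a single shared model.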
5.
  • Frady, Edward Paxon, et al. (author)
  • Variable Binding for Sparse Distributed Representations: Theory and Applications
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 34:5, s. 2191-2204
  • Journal article (peer-reviewed), abstract:
    • Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatoric expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, a well-known binding operation that increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method for general sparse vectors uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure (sparse block-codes). Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
  •  
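Block-local circular convolution, the binding method the abstract finds to have ideal properties, can be sketched as follows (an illustrative reimplementation, not the authors' code):

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution of two 1-D vectors, a standard VSA binding op."""
    n = len(a)
    return np.array([sum(a[j] * b[(i - j) % n] for j in range(n))
                     for i in range(n)])

def block_bind(x, y, block):
    """Block-local circular convolution: split both vectors into blocks of
    size `block` and bind corresponding blocks independently, so the result
    keeps the same dimensionality and block structure as the inputs."""
    out = np.zeros_like(x)
    for s in range(0, len(x), block):
        out[s:s + block] = circ_conv(x[s:s + block], y[s:s + block])
    return out
```

With sparse block-codes (at most one active unit per block), binding two one-hot blocks active at positions i and j yields a one-hot block active at (i + j) mod block, so both dimensionality and sparsity are preserved.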
6.
  • Ghareh Baghi, Arash, et al. (author)
  • A Deep Machine Learning Method for Classifying Cyclic Time Series of Biological Signals Using Time-Growing Neural Network
  • 2018
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 29:9, pp. 4102-4115
  • Journal article (peer-reviewed), abstract:
    • This paper presents a novel method for learning the cyclic contents of stochastic time series: the deep time-growing neural network (DTGNN). The DTGNN combines supervised and unsupervised methods at different levels of learning for enhanced performance. It is employed in a multiscale learning structure to classify cyclic time series (CTS), in which the dynamic contents of the time series are preserved in an efficient manner. The paper suggests a systematic procedure for finding the design parameters of the classification method for a one-versus-multiple-class application. A novel validation method is also suggested for evaluating the structural risk in both a quantitative and a qualitative manner. The effect of the DTGNN on the performance of the classifier is statistically validated through repeated random subsampling using different sets of CTS from different medical applications. The validation involves four medical databases, comprising 108 recordings of the electroencephalogram signal, 90 recordings of the electromyogram signal, 130 recordings of the heart sound signal, and 50 recordings of the respiratory sound signal. Results of the statistical validations show that the DTGNN significantly improves the performance of the classification and also exhibits an optimal structural risk.
  •  
7.
  • Huang, Mengyu, et al. (author)
  • Learning-Based DoS Attack Power Allocation in Multiprocess Systems
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 34:10, pp. 8017-8030
  • Journal article (peer-reviewed), abstract:
    • We study the denial-of-service (DoS) attack power allocation optimization in a multiprocess cyber–physical system (CPS), where sensors observe different dynamic processes and send the local estimated states to a remote estimator through wireless channels, while a DoS attacker allocates its attack power on different channels as interference to reduce the wireless transmission rates, and thus degrading the estimation accuracy of the remote estimator. We consider two attack optimization problems. One is to maximize the average estimation error of different processes, and the other is to maximize the minimal one. We formulate these problems as Markov decision processes (MDPs). Unlike the majority of existing works where the attacker is assumed to have complete knowledge of the CPS, we consider an attacker with no prior knowledge of the wireless channel model and the sensor information. To address this uncertainty issue and the curse of dimensionality, we provide a learning-based attack power allocation algorithm stemming from the double deep Q-network (DDQN) method. First, with a defined partial order, the maximal elements of the action space are determined. By investigating the characteristic of the MDP, we prove that the optimal attack allocations of both problems belong to the set of these elements. This property reduces the entire action space to a smaller subset and speeds up the learning algorithm. In addition, to further improve the data efficiency and learning performance, we propose two enhanced attack power allocation algorithms which add two auxiliary tasks of MDP transition estimation inspired by model-based reinforcement learning, i.e., the next state prediction and the current action estimation. Experimental results demonstrate the versatility and efficiency of the proposed algorithms in different system settings compared with other algorithms, such as the conventional value iteration, double Q-learning, and deep Q-network.
  •  
8.
  • Kleyko, Denis, et al. (author)
  • Cellular Automata Can Reduce Memory Requirements of Collective-State Computing
  • 2022
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 33:6, pp. 2701-2713
  • Journal article (peer-reviewed), abstract:
    • Various nonclassical approaches to distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant to a computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables a space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off against computation by running CA90. We investigate the randomization behavior of CA90, in particular the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs comparably to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
  •  
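Rule 90 itself is simple to state: each cell's next value is the XOR of its two neighbors. A minimal sketch of the expansion idea on a circular grid (illustrative, not the authors' implementation):

```python
import numpy as np

def ca90_step(state):
    """One step of elementary cellular automaton rule 90 on a circular grid:
    each cell becomes the XOR of its left and right neighbors."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def ca90_expand(seed, steps):
    """Expand a short binary seed into a longer pattern by concatenating
    successive CA90 states: computation is traded for stored memory."""
    out, s = [seed], seed
    for _ in range(steps - 1):
        s = ca90_step(s)
        out.append(s)
    return np.concatenate(out)
```

Only the short seed needs to be stored; the long pseudo-random pattern is regenerated deterministically whenever it is needed.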
9.
  • Kleyko, Denis, 1990-, et al. (author)
  • Classification and Recall With Binary Hyperdimensional Computing: Tradeoffs in Choice of Density and Mapping Characteristics
  • 2018
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 29:12, pp. 5880-5898
  • Journal article (peer-reviewed), abstract:
    • Hyperdimensional (HD) computing is a promising paradigm for future intelligent electronic appliances operating at low power. This paper discusses tradeoffs of selecting parameters of binary HD representations when applied to pattern recognition tasks. Particular design choices include density of representations and strategies for mapping data from the original representation. It is demonstrated that for the considered pattern recognition tasks (using synthetic and real-world data) both sparse and dense representations behave nearly identically. This paper also discusses implementation peculiarities which may favor one type of representations over the other. Finally, the capacity of representations of various densities is discussed.
  •  
10.
  • Kleyko, Denis, et al. (author)
  • Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks
  • 2021
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 32:8, pp. 3777-3783
  • Journal article (peer-reviewed), abstract:
    • The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this brief, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known in the area of stochastic computing and to use the operations of binding and bundling from the area of hyperdimensional computing for obtaining the activations of the hidden neurons. Using a collection of 121 real-world datasets from the UCI machine learning repository, we empirically show that the proposed approach demonstrates higher average accuracy than the conventional RVFL. We also demonstrate that it is possible to represent the readout matrix using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small $n$-bit integers, which results in a computationally efficient architecture. Finally, through hardware field-programmable gate array (FPGA) implementations, we show that such an approach consumes approximately 11 times less energy than the conventional RVFL.
  •  
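The density-based encoding mentioned above can be sketched roughly as follows (a thermometer-style variant for illustration; the paper's exact encoding from stochastic computing may differ):

```python
import numpy as np

def density_encode(x, n_bits):
    """Represent a scalar x in [0, 1] as a binary vector whose number of
    ones (its density) is proportional to x. Here the ones are packed at
    the start of the vector, thermometer-code style."""
    k = int(round(float(np.clip(x, 0.0, 1.0)) * n_bits))
    v = np.zeros(n_bits, dtype=np.int8)
    v[:k] = 1
    return v
```

Because the value is carried by the count of ones rather than by high-precision weights, such encodings map naturally onto cheap binary hardware.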
11.
  • Kleyko, Denis, et al. (author)
  • Generalized Key-Value Memory to Flexibly Adjust Redundancy in Memory-Augmented Networks
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 34:12, pp. 10993-10998
  • Journal article (peer-reviewed), abstract:
    • Memory-augmented neural networks enhance a neural network with an external key-value (KV) memory whose complexity is typically dominated by the number of support vectors in the key memory. We propose a generalized KV memory that decouples its dimension from the number of support vectors by introducing a free parameter that can arbitrarily add or remove redundancy to the key memory representation. In effect, it provides an additional degree of freedom to flexibly control the tradeoff between robustness and the resources required to store and compute the generalized KV memory. This is particularly useful for realizing the key memory on in-memory computing hardware where it exploits nonideal, but extremely efficient nonvolatile memory devices for dense storage and computation. Experimental results show that adapting this parameter on demand effectively mitigates up to 44% nonidealities, at equal accuracy and number of devices, without any need for neural network retraining.
  •  
12.
  • Kleyko, Denis, et al. (author)
  • Holographic Graph Neuron: A Bio-Inspired Architecture for Pattern Processing
  • 2017
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 28:6, pp. 1250-1262
  • Journal article (peer-reviewed), abstract:
    • This article proposes the use of Vector Symbolic Architectures for implementing Hierarchical Graph Neuron, an architecture for memorizing patterns of generic sensor stimuli. The adoption of a Vector Symbolic representation ensures a one-layered design for the approach, while maintaining the previously reported properties and performance characteristics of Hierarchical Graph Neuron, and also improves the noise resistance of the architecture. The proposed architecture enables search for an arbitrary sub-pattern in time linear in the number of stored entries.
  •  
13.
  • Kleyko, Denis, et al. (author)
  • Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware
  • 2022
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 33:4, pp. 1688-1701
  • Journal article (peer-reviewed), abstract:
    • We propose an approximation of echo state networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed integer ESN (intESN) is a vector containing only n-bit integers (where n < 8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The proposed intESN approach is verified on typical tasks in reservoir computing: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss. Experiments on a field-programmable gate array confirm that the proposed intESN approach is much more energy efficient than the conventional ESN.
  •  
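The core intESN recurrence replaces the ESN's recurrent matrix multiply with a cyclic shift of the integer state. A minimal sketch (illustrative; the paper's clipping nonlinearity and input encoding may differ in detail):

```python
import numpy as np

def intesn_update(state, u_enc, kappa):
    """One sketched intESN reservoir update: cyclically shift the integer
    state (in place of a recurrent matrix multiplication), add the
    integer-encoded input, and clip so entries stay in [-kappa, kappa]."""
    return np.clip(np.roll(state, 1) + u_enc, -kappa, kappa)
```

Shift, add, and clip are all cheap integer operations, which is what makes the reservoir attractive for digital hardware such as FPGAs.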
14.
  • Kleyko, Denis, et al. (author)
  • Perceptron Theory Can Predict the Accuracy of Neural Networks
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X .- 2162-2388.
  • Journal article (peer-reviewed), abstract:
    • Multilayer neural networks set the current state of the art for many technical classification problems. But these networks are still, essentially, black boxes in terms of analyzing them and predicting their performance. Here, we develop a statistical theory for the one-layer perceptron and show that it can predict the performance of a surprisingly large variety of neural networks with different architectures. A general theory of classification with perceptrons is developed by generalizing an existing theory for analyzing reservoir computing models and connectionist models for symbolic reasoning known as vector symbolic architectures. Our statistical theory offers three formulas leveraging the signal statistics with increasing detail. The formulas are analytically intractable but can be evaluated numerically. The description level that captures maximum detail requires stochastic sampling methods. Depending on the network model, the simpler formulas already yield high prediction accuracy. The quality of the theory's predictions is assessed in three experimental settings: a memorization task for echo state networks (ESNs) from the reservoir computing literature, a collection of classification datasets for shallow randomly connected networks, and the ImageNet dataset for deep convolutional neural networks. We find that the second description level of the perceptron theory can predict the performance of types of ESNs that could not be described previously. Furthermore, the theory can predict the performance of deep multilayer neural networks when applied to their output layer. While other methods for predicting the performance of neural networks commonly require training an estimator model, the proposed theory requires only the first two moments of the distribution of the postsynaptic sums in the output neurons. Moreover, the perceptron theory compares favorably to other methods that do not rely on training an estimator model.
  •  
15.
  • Liao, Yicheng, et al. (author)
  • Neural Network Design for Impedance Modeling of Power Electronic Systems Based on Latent Features
  • 2024
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 35:5, pp. 5968-5980
  • Journal article (peer-reviewed), abstract:
    • Data-driven approaches are promising for addressing the modeling issues of modern power-electronics-based power systems, owing to their black-box nature. Frequency-domain analysis has been applied to address the emerging small-signal oscillation issues caused by converter control interactions. However, the frequency-domain model of a power electronic system is linearized around a specific operating condition, so frequency-domain models must be measured or identified repeatedly at many operating points (OPs) due to the wide operation range of power systems, which brings a significant computation and data burden. This article addresses this challenge by developing a deep learning approach that uses multilayer feedforward neural networks (FNNs) to train a frequency-domain impedance model of power electronic systems that is continuous in the OP. Distinguished from prior neural network designs that rely on trial and error and sufficient data size, this article proposes to design the FNN based on latent features of power electronic systems, i.e., the number of system poles and zeros. To further investigate the impact of data quantity and quality, learning procedures from a small dataset are developed, and K-medoids clustering based on dynamic time warping is used to reveal insights into multivariable sensitivity, which helps improve data quality. Case studies on a power electronic converter show the proposed FNN design and learning approaches to be simple, effective, and optimal, and future prospects for industrial applications are also discussed.
  •  
16.
  • Ma, Zhanyu, et al. (author)
  • Decorrelation of Neutral Vector Variables: Theory and Applications
  • 2018
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 29:1, pp. 129-143
  • Journal article (peer-reviewed), abstract:
    • In this paper, we propose novel strategies for neutral vector variable decorrelation. Two fundamental invertible transformations, namely, serial nonlinear transformation and parallel nonlinear transformation, are proposed to carry out the decorrelation. For a neutral vector variable, which is not multivariate-Gaussian distributed, the conventional principal component analysis cannot yield mutually independent scalar variables. With the two proposed transformations, a highly negatively correlated neutral vector can be transformed to a set of mutually independent scalar variables with the same degrees of freedom. We also evaluate the decorrelation performances for the vectors generated from a single Dirichlet distribution and a mixture of Dirichlet distributions. The mutual independence is verified with the distance correlation measurement. The advantages of the proposed decorrelation strategies are intensively studied and demonstrated with synthesized data and practical application evaluations.
  •  
17.
  • Ma, Zhanyu, et al. (author)
  • Insights Into Multiple/Single Lower Bound Approximation for Extended Variational Inference in Non-Gaussian Structured Data Modeling
  • 2020
  • In: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X .- 2162-2388. ; 31:7, pp. 2240-2254
  • Journal article (peer-reviewed), abstract:
    • For most of the non-Gaussian statistical models, the data being modeled represent strongly structured properties, such as scalar data with bounded support (e.g., beta distribution), vector data with unit length (e.g., Dirichlet distribution), and vector data with positive elements (e.g., generalized inverted Dirichlet distribution). In practical implementations of non-Gaussian statistical models, it is infeasible to find an analytically tractable solution to estimating the posterior distributions of the parameters. Variational inference (VI) is a widely used framework in Bayesian estimation. Recently, an improved framework, namely, the extended VI (EVI), has been introduced and applied successfully to a number of non-Gaussian statistical models. EVI derives analytically tractable solutions by introducing lower bound approximations to the variational objective function. In this paper, we compare two approximation strategies, namely, the multiple lower bounds (MLBs) approximation and the single lower bound (SLB) approximation, which can be applied to carry out the EVI. For implementation, two different conditions, the weak and the strong conditions, are discussed. Convergence of the EVI depends on the selection of the lower bound, regardless of the choice of weak or strong condition. We also discuss the convergence properties to clarify the differences between MLB and SLB. Extensive comparisons are made based on some EVI-based non-Gaussian statistical models. Theoretical analysis is conducted to demonstrate the differences between the weak and strong conditions. Experimental results based on real data show advantages of the SLB approximation over the MLB approximation.
  •  
18.
  • Modares, Amir, et al. (author)
  • Safe Reinforcement Learning via a Model-Free Safety Certifier
  • 2023
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388.
  • Journal article (peer-reviewed), abstract:
    • This article presents a data-driven safe reinforcement learning (RL) algorithm for discrete-time nonlinear systems. A data-driven safety certifier is designed to intervene in the actions of the RL agent to ensure both the safety and the stability of its actions. This is in sharp contrast to existing model-based safety certifiers, which can result in convergence to an undesired equilibrium point or in conservative interventions that jeopardize the performance of the RL agent. To this end, the proposed method directly learns a robust safety certifier while completely bypassing the identification of the system model. The nonlinear system is modeled using linear parameter-varying (LPV) systems with polytopic disturbances. To avoid the requirement of learning an explicit model of the LPV system, data-based $\lambda$-contractivity conditions are first provided for the closed-loop system to enforce robust invariance of a prespecified polyhedral safe set and the system's asymptotic stability. These conditions are then leveraged to directly learn a robust data-based gain-scheduling controller by solving a convex program. A significant advantage of the proposed direct safe learning over model-based certifiers is that it completely resolves conflicts between safety and stability requirements while assuring convergence to the desired equilibrium point. Data-based safety certification conditions are then provided using Minkowski functions. They are then used to seamlessly integrate the learned backup safe gain-scheduling controller with the RL controller. Finally, we provide a simulation example to verify the effectiveness of the proposed approach.
  •  
19.
  • Naseer, Muzammal, et al. (author)
  • Guidance Through Surrogate: Toward a Generic Diagnostic Attack
  • 2022
  • In: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388.
  • Journal article (peer-reviewed), abstract:
    • Adversarial training (AT) is an effective approach to making deep neural networks robust against adversarial attacks. Recently, different AT defenses have been proposed that not only maintain high clean accuracy but also show significant robustness against popular and well-studied adversarial attacks, such as projected gradient descent (PGD). High adversarial robustness can also arise if an attack fails to find adversarial gradient directions, a phenomenon known as "gradient masking." In this work, we analyze the effect of label smoothing on AT as one of the potential causes of gradient masking. We then develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed the guided projected gradient attack (G-PGA). Our attack approach is based on a "match and deceive" loss that finds optimal adversarial directions through guidance from a surrogate model. Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size. Furthermore, our proposed G-PGA is generic, so it can be combined with an ensemble attack strategy, as we demonstrate in the case of AutoAttack, leading to efficiency and convergence-speed improvements. More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
  •  
20.
  •  
21.
  • Tsantekidis, Avraam, et al. (författare)
  • Price Trailing for Financial Trading Using Deep Reinforcement Learning
  • 2021
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 32:7, s. 2837-2846
  • Tidskriftsartikel (refereegranskat)abstract
    • Machine learning methods have recently seen a growing number of applications in financial trading. Being able to automatically extract patterns from past price data and consistently apply them in the future has been the focus of many quantitative trading applications. However, developing machine learning-based methods for financial trading is not straightforward, requiring carefully designed targets/rewards, hyperparameter fine-tuning, and so on. Furthermore, most of the existing methods are unable to effectively exploit the information available across various financial instruments. In this article, we propose a deep reinforcement learning-based approach, which ensures that consistent rewards are provided to the trading agent, mitigating the noisy nature of the profit-and-loss rewards that are usually used. To this end, we employ a novel price trailing-based reward shaping approach, significantly improving the performance of the agent in terms of profit, Sharpe ratio, and maximum drawdown. Furthermore, we carefully design a data preprocessing method that allows for training the agent on different FOREX currency pairs, providing a way to develop market-wide RL agents while, at the same time, exploiting more powerful recurrent deep learning models without the risk of overfitting. The ability of the proposed methods to improve various performance metrics is demonstrated using a challenging large-scale data set, containing 28 instruments, provided by Speedlab AG.
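    • The idea of a trailing-based shaped reward can be sketched as follows. This is a hypothetical, simplified formulation for illustration only (the article's exact reward is not given in the abstract): the agent is rewarded densely for keeping a "trail" price close to the market price, rather than sparsely through noisy profit-and-loss.

```python
# Hypothetical trailing-based reward shaping: a dense, low-variance signal
# penalizing the distance between the agent's trail price and the market price.
# (Illustrative sketch; the paper's exact formulation may differ.)
def trailing_reward(trail_price, market_price, scale=1.0):
    return -scale * abs(market_price - trail_price)

prices = [100.0, 100.5, 101.0, 100.8]
trail = 100.0
total = 0.0
for p in prices:
    total += trailing_reward(trail, p)
    trail += 0.5 * (p - trail)   # agent "action": move the trail toward the price
```

Because every step yields an informative penalty, the agent receives feedback even when no trade closes, which is the variance-reduction effect the abstract attributes to reward shaping.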
  •  
22.
  • Wang, B., et al. (författare)
  • Semiglobal Suboptimal Output Regulation for Heterogeneous Multi-Agent Systems With Input Saturation via Adaptive Dynamic Programming
  • 2022
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; , s. 1-9
  • Tidskriftsartikel (refereegranskat)abstract
    • This article considers the semiglobal cooperative suboptimal output regulation problem for heterogeneous multi-agent systems with unknown agent dynamics in the presence of input saturation. To solve the problem, we develop distributed suboptimal control strategies from two perspectives, namely, model-based and data-driven. For the model-based case, we design a suboptimal control strategy using the low-gain technique and output regulation theory. Moreover, when the agents’ dynamics are unknown, we design a data-driven algorithm to solve the problem. We show that the proposed control strategies ensure that each agent’s output gradually follows the reference signal and achieves interference suppression while guaranteeing closed-loop stability. The theoretical results are illustrated by a numerical simulation example.
  •  
23.
  • Wang, Z., et al. (författare)
  • A Sparse Model-Inspired Deep Thresholding Network for Exponential Signal Reconstruction--Application in Fast Biological Spectroscopy
  • 2022
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - : Institute of Electrical and Electronics Engineers (IEEE). - 2162-237X .- 2162-2388. ; 34:10, s. 7578-7592
  • Tidskriftsartikel (refereegranskat)abstract
    • Nonuniform sampling (NUS) is a powerful approach to enabling fast acquisition but requires sophisticated reconstruction algorithms. Faithful reconstruction from partially sampled exponentials is highly desirable in general signal processing and many applications. Deep learning (DL) has shown astonishing potential in this field, but many existing problems, such as lack of robustness and explainability, greatly limit its applications. In this work, by combining the merits of sparse model-based optimization methods and data-driven DL, we propose a DL architecture for spectrum reconstruction from undersampled data, called MoDern. It follows the iterative reconstruction of solving a sparse model to build the neural network, and we carefully design a learnable soft-thresholding operator to adaptively eliminate the spectrum artifacts introduced by undersampling. Extensive results on both synthetic and biological data show that MoDern enables more robust, high-fidelity, and ultrafast reconstruction compared with state-of-the-art methods. Remarkably, MoDern has a small number of network parameters and is trained solely on synthetic data while generalizing well to biological data in various scenarios. Furthermore, we extend it to an open-access and easy-to-use cloud computing platform (XCloud-MoDern), contributing a promising strategy for the further development of biological applications.
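    • The soft-thresholding operator that MoDern makes learnable is the classical proximal operator of the l1 norm, applied inside an ISTA-style iteration. A minimal sketch with a fixed threshold (MoDern learns it per layer) and a binary sampling mask standing in for the NUS operator:

```python
import numpy as np

# Soft-thresholding: the proximal operator of lam * ||x||_1.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# One ISTA-style iteration for y = M x with a binary sampling mask M:
def ista_step(x, y, mask, lam, step=1.0):
    grad = mask * (mask * x - y)           # gradient of 0.5 * ||M x - y||^2
    return soft_threshold(x - step * grad, lam)

y = np.array([1.0, 0.0, -2.0])
mask = np.array([1.0, 0.0, 1.0])           # the middle sample was not acquired
x = ista_step(np.zeros(3), y, mask, lam=0.1)
```

Unrolling a fixed number of such iterations into network layers, with `lam` as a trainable parameter, is the model-inspired design pattern the abstract describes.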
  •  
24.
  • Yu, Yinan, 1985, et al. (författare)
  • CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification
  • 2018
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X .- 2162-2388. ; 29:2, s. 440-456
  • Tidskriftsartikel (refereegranskat)abstract
    • In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability through a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm, called adaptive margin slack minimization, that iteratively improves the classification accuracy through adaptive data selection. We motivate each part separately and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.
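    • The general idea of approximating a kernel matrix from a selected subset, which class-specific subspace representations build on, can be sketched with a standard Nyström-style approximation (illustrative only; the paper's class-specific construction differs in how the subset is chosen):

```python
import numpy as np

# Subset-based kernel approximation: build a rank-m surrogate of the full
# n x n kernel matrix from m landmark points, avoiding all n^2 entries.
def rbf(a, b, gamma=1.0):
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.sum(d * d, axis=2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
landmarks = X[:10]                         # e.g. a per-class subset
C = rbf(X, landmarks)                      # n x m cross-kernel
W = rbf(landmarks, landmarks)              # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T     # rank-m approximation of K
K_full = rbf(X, X)
err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

Storage drops from O(n^2) to O(nm), which is what makes sequential and distributed processing of large data sets tractable.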
  •  
25.
  • Zheng, Ren, et al. (författare)
  • Stability of Analytic Neural Networks With Event-Triggered Synaptic Feedbacks
  • 2016
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - : IEEE. - 2162-237X .- 2162-2388. ; 27:2, s. 483-494
  • Tidskriftsartikel (refereegranskat)abstract
    • In this paper, we investigate the stability of a class of analytic neural networks with synaptic feedback governed by event-triggered rules. This model is general and includes the Hopfield neural network as a special case. The event-triggered rules efficiently reduce the load of computation and information transmission at the synapses of the neurons. The synaptic feedback of each neuron keeps a constant value based on the outputs of the other neurons at its latest triggering time and changes only at its next triggering time, which is determined by a certain criterion. It is proved that every trajectory of the analytic neural network converges to an equilibrium under this event-triggered rule for all initial values except a set of zero measure. The main technique of the proof is the Lojasiewicz inequality, used to prove the finiteness of the trajectory length. The realization of this event-triggered rule is verified by the exclusion of Zeno behaviors. Numerical examples are provided to illustrate the efficiency of the theoretical results.
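    • The event-triggered mechanism described above can be sketched in simulation. This toy discrete-time network (not the paper's analytic model) shows the key ingredients: each neuron's output is held constant at the synapses until it drifts past a threshold, at which point a transmission "event" updates the held value.

```python
import numpy as np

# Toy event-triggered feedback: a neuron rebroadcasts its output only when it
# drifts from the last transmitted value by more than a threshold, cutting
# communication while the dynamics still settle.
def simulate(steps=200, threshold=0.05):
    x = np.array([1.0, -1.0])
    last_sent = x.copy()                   # values currently held at the synapses
    W = np.array([[0.0, 0.3],
                  [0.3, 0.0]])
    events = 0
    for _ in range(steps):
        x = 0.9 * x + 0.1 * np.tanh(W @ last_sent)   # leaky update on held values
        trig = np.abs(x - last_sent) > threshold      # event-trigger criterion
        events += int(np.sum(trig))
        last_sent[trig] = x[trig]                     # transmit only on events
    return x, events

x_final, n_events = simulate()
```

The event count stays far below the two-transmissions-per-step maximum, which is the communication saving the abstract highlights.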
  •  
26.
  • Huang, Y., et al. (författare)
  • Exponential Signal Reconstruction With Deep Hankel Matrix Factorization
  • 2022
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X. ; 34:9, s. 6214-6226
  • Tidskriftsartikel (refereegranskat)abstract
    • The exponential function is a basic form of temporal signal, and how to acquire such signals quickly is one of the fundamental problems and frontiers of signal processing. Toward this goal, partial data may be acquired, but this results in severe artifacts in the spectrum, which is the Fourier transform of the exponentials. Thus, reliable spectrum reconstruction is highly desirable for fast data acquisition in many applications, such as chemistry, biology, and medical imaging. In this work, we propose a deep learning method whose neural network structure is designed by imitating the iterative process of the state-of-the-art model-based exponential reconstruction method with low-rank Hankel matrix factorization. With experiments on synthetic data and realistic biological magnetic resonance signals, we demonstrate that the new method yields much lower reconstruction errors and preserves low-intensity signals much better than the compared methods.
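    • The low-rank Hankel property exploited here is easy to verify: a sum of R damped complex exponentials yields a Hankel matrix of rank exactly R, so truncating its SVD denoises or completes the signal. A minimal numerical check:

```python
import numpy as np

# A sum of R exponentials gives a Hankel matrix of rank R (Vandermonde
# decomposition), which is the prior behind low-rank Hankel reconstruction.
def hankel(sig, rows):
    cols = len(sig) - rows + 1
    return np.array([sig[i:i + cols] for i in range(rows)])

n = np.arange(64)
clean = np.exp((-0.02 + 0.5j) * n) + 0.5 * np.exp((-0.01 + 1.3j) * n)

H = hankel(clean, 32)
s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))   # numerical rank = number of exponentials
```

Truncating the SVD of a noisy or partially sampled Hankel matrix back to this rank is the model-based iteration that the network architecture imitates.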
  •  
27.
  • Li, Yuling, et al. (författare)
  • Distributed Neural-Network-Based Cooperation Control for Teleoperation of Multiple Mobile Manipulators Under Round-Robin Protocol
  • 2021
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X.
  • Tidskriftsartikel (refereegranskat)abstract
    • This article addresses the distributed cooperative control design for a class of sampled-data teleoperation systems with multiple slave mobile manipulators grasping an object in the presence of communication bandwidth limitation and time delays. Discrete-time information transmission with time-varying delays is assumed, and the Round-Robin (RR) scheduling protocol is used to regulate the data transmission from the multiple slaves to the master. The control task is to guarantee the task-space position synchronization between the master and the grasped object with the mobile bases in a fixed formation. A fully distributed control strategy including neural-network-based task-space synchronization controllers and neural-network-based null-space formation controllers is proposed, where the radial basis function (RBF) neural networks with adaptive estimation of approximation errors are used to compensate for the dynamic uncertainties. The stability and the synchronization/formation features of the single-master-multiple-slaves (SMMS) teleoperation system are analyzed, and the relationship among the control parameters, the upper bound of the time delays, and the maximum allowable sampling interval is established. Experiments are implemented to validate the effectiveness of the proposed control algorithm.
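    • The Round-Robin protocol used to share the slave-to-master channel is simple to state: in each sampling interval exactly one slave's packet is granted the channel, in cyclic order. A protocol-only sketch (the delays and manipulator dynamics from the article are omitted):

```python
# Round-Robin scheduling of slave-to-master transmissions: slave k % N is
# granted the channel at step k, so every slave transmits once per N steps.
def rr_schedule(num_slaves, num_steps):
    return [k % num_slaves for k in range(num_steps)]

order = rr_schedule(3, 7)        # which slave transmits at each step
# periodicity check: the grant pattern repeats every num_slaves steps
gap_ok = all(order[k] == order[k + 3] for k in range(len(order) - 3))
```

The bounded inter-transmission gap (each slave waits at most N-1 steps) is what lets the stability analysis relate the protocol period to the maximum allowable sampling interval.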
  •  
28.
  • Varagnolo, Damiano, et al. (författare)
  • Finding Potential Support Vectors in Separable Classification Problems
  • 2013
  • Ingår i: IEEE Transactions on Neural Networks and Learning Systems. - 2162-237X. ; 24:11, s. 1799-1813
  • Tidskriftsartikel (refereegranskat)abstract
    • This paper considers the classification problem using support vector (SV) machines and investigates how to maximally reduce the size of the training set without losing information. Under separable data set assumptions, we derive the exact conditions stating which observations can be discarded without diminishing the overall information content. For this purpose, we introduce the concept of potential SVs, i.e., those data that can become SVs when future data become available. To complement this, we also characterize the set of discardable vectors (DVs), i.e., those data that, given the current data set, can never become SVs. Thus, these vectors are useless for future training purposes and can eventually be removed without loss of information. Then, we provide an efficient algorithm based on linear programming that returns the potential SVs and the DVs by constructing a simplex tableau. Finally, we compare it with alternative algorithms available in the literature on some synthetic data as well as on data sets from standard repositories.
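    • The linear-programming flavor of this classification of points can be illustrated with a related, classical fact: for a separable hard-margin SVM, a point strictly inside the convex hull of its own class can never be a support vector, and convex-hull membership is an LP feasibility problem. This is an illustrative test, not the paper's exact simplex-tableau construction:

```python
import numpy as np
from scipy.optimize import linprog

# Test whether p lies in the convex hull of pts by searching for convex
# coefficients lam >= 0 with sum(lam) = 1 and sum(lam_i * x_i) = p.
def in_convex_hull(p, pts):
    m = len(pts)
    A_eq = np.vstack([pts.T, np.ones(m)])
    b_eq = np.append(p, 1.0)
    res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * m)    # pure feasibility: zero objective
    return res.success

cls = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
inner = in_convex_hull(np.array([1.0, 1.0]), cls)   # inside  -> never an SV
outer = in_convex_hull(np.array([3.0, 3.0]), cls)   # outside -> potential SV
```

The paper's criterion is finer (it also accounts for the opposite class and future data), but the mechanism — deciding a point's status by solving a small LP — is the same.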
  •  