SwePub
Search the SwePub database


Results list for the search "WFRF:(Liang Xinyue)"

Search query: WFRF:(Liang Xinyue)

  • Results 1-18 of 18
1.
  • Beal, Jacob, et al. (authors)
  • Robust estimation of bacterial cell count from optical density
  • 2020
  • Published in: Communications Biology. - : Springer Science and Business Media LLC. - 2399-3642. ; 3:1
  • Journal article (peer-reviewed), abstract:
    • Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses instrument effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
  •  
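The recommended calibration step can be illustrated with a minimal sketch: fit a single conversion factor mapping OD readings to particle count from a serial dilution of microspheres with known concentrations. All numbers below are made up for illustration; they are not the study's actual values or protocol.

```python
import numpy as np

# Hypothetical serial dilution of silica microspheres: each well holds half
# the particle count of the previous one (known from the stock concentration).
particles = 3.0e8 * 0.5 ** np.arange(10)   # particles per well (known)
true_od_per_particle = 2.0e-9              # assumed instrument response
od = particles * true_od_per_particle      # noiseless synthetic OD readings

# Calibration: least-squares fit of OD = k * particles through the origin,
# giving a conversion factor from OD to estimated particle (cell) count.
k = np.dot(od, particles) / np.dot(particles, particles)

def od_to_count(od_reading):
    """Convert an OD reading to an estimated particle count."""
    return od_reading / k

# A reading of OD 0.30 maps back to an estimated particle count.
estimate = od_to_count(0.30)
```

With noisy readings the same least-squares fit applies; restricting the fit to the instrument's effective linear range, as the study recommends, keeps the conversion factor reliable.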
2.
  • Bao, Zijia, et al. (authors)
  • A helical polypyrrole nanotube interwoven zeolitic imidazolate framework and its derivative as an oxygen electrocatalyst
  • 2022
  • Published in: Chemical Communications. - : Royal Society of Chemistry (RSC). - 1359-7345 .- 1364-548X. ; 58:80, pp. 11288-11291
  • Journal article (peer-reviewed), abstract:
    • A helical polypyrrole nanotube interwoven zeolitic imidazolate framework (ZIF) has been prepared for the first time. After pyrolysis, the helical carbon could act as highly active sites, while the 3D-connected nanoarchitecture contributed to fast charge transfer. The derived carbon material exhibits high activity for the ORR and good performance for a Zn–air battery.
  •  
3.
  • Javid, Alireza M., et al. (authors)
  • Adaptive Learning without Forgetting via Low-Complexity Convex Networks
  • 2020
  • Published in: 28th European Signal Processing Conference (EUSIPCO 2020). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 1623-1627
  • Conference paper (peer-reviewed), abstract:
    • We study the problem of learning without forgetting (LwF) in which a deep learning model learns new tasks without a significant drop in the classification performance on the previously learned tasks. We propose an LwF algorithm for multilayer feedforward neural networks in which we can adapt the number of layers of the network from the old task to the new task. To this end, we limit ourselves to convex loss functions in order to train the network in a layer-wise manner. Layer-wise convex optimization leads to low-computational complexity and provides a more interpretable understanding of the network. We compare the effectiveness of the proposed adaptive LwF algorithm with the standard LwF over image classification datasets.
  •  
4.
  • Jurado, Pol Grau, et al. (authors)
  • Deterministic transform based weight matrices for neural networks
  • 2022
  • Published in: 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 4528-4532
  • Conference paper (peer-reviewed), abstract:
    • We propose to use deterministic transforms as weight matrices for several feedforward neural networks. The use of deterministic transforms helps to reduce the computational complexity in two ways: (1) matrix-vector product complexity in forward pass, helping real time complexity, and (2) fully avoiding backpropagation in the training stage. For each layer of a feedforward network, we propose two unsupervised methods to choose the most appropriate deterministic transform from a set of transforms (a bag of well-known transforms). Experimental results show that the use of deterministic transforms is as good as traditional random matrices in the sense of providing similar classification performance.
  •  
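The core idea of replacing trained or random weight matrices with a fixed deterministic transform can be sketched as follows. This is a toy example using a Hadamard matrix; the layer sizes, normalization, and activation are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def layer_forward(x: np.ndarray, n_out: int) -> np.ndarray:
    """One feedforward layer whose weight matrix is a fixed deterministic
    transform instead of a trained or random matrix (no backpropagation)."""
    W = hadamard(n_out)[:, : x.shape[0]] / np.sqrt(n_out)  # normalized transform
    return np.maximum(W @ x, 0.0)                          # ReLU activation

x = np.random.default_rng(0).standard_normal(8)
h = layer_forward(x, 16)
```

Because the transform is fixed and structured, the matrix-vector product can use fast algorithms (e.g., the fast Walsh-Hadamard transform), which is where the forward-pass complexity reduction comes from.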
5.
  • Jurado, Pol Grau, et al. (authors)
  • Use of Deterministic Transforms to Design Weight Matrices of a Neural Network
  • 2021
  • Published in: 29th European Signal Processing Conference (EUSIPCO 2021). - : European Association for Signal, Speech and Image Processing (EURASIP). ; pp. 1366-1370
  • Conference paper (peer-reviewed), abstract:
    • The self size-estimating feedforward network (SSFN) is a feedforward multilayer network. In the existing SSFN, part of each weight matrix is trained using a layer-wise convex optimization approach (supervised training), while the remaining part is chosen as a random matrix instance (unsupervised training). This article explores the use of deterministic transforms instead of random matrix instances for the SSFN weight matrices, which reduces computational complexity. Several deterministic transforms are investigated, such as the discrete cosine transform, Hadamard transform, Hartley transform, and wavelet transforms. The choice of a deterministic transform among a set of transforms is made in an unsupervised manner; to this end, two methods based on the statistical parameters of the features are developed. The proposed methods help to design a neural net where the deterministic transform can vary across the layers' weight matrices. The effectiveness of the proposed approach vis-a-vis the SSFN is illustrated for object classification tasks using several benchmark datasets.
  •  
6.
  • Liang, Xinyue, et al. (authors)
  • A Low Complexity Decentralized Neural Net with Centralized Equivalence using Layer-wise Learning
  • 2020
  • Published in: 2020 International Joint Conference on Neural Networks (IJCNN). - : IEEE.
  • Conference paper (peer-reviewed), abstract:
    • We design a low complexity decentralized learning algorithm to train a recently proposed large neural network in distributed processing nodes (workers). We assume the communication network between the workers is synchronized and can be modeled as a doubly-stochastic mixing matrix without having any master node. In our setup, the training data is distributed among the workers but is not shared in the training process due to privacy and security concerns. Using the alternating-direction-method-of-multipliers (ADMM) along with a layer-wise convex optimization approach, we propose a decentralized learning algorithm which enjoys low computational complexity and communication cost among the workers. We show that it is possible to achieve equivalent learning performance as if the data were available at a single place. Finally, we experimentally illustrate the time complexity and convergence behavior of the algorithm.
  •  
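The doubly-stochastic mixing step underlying such synchronous decentralized algorithms can be sketched with a toy average-consensus example on a ring of six workers. The actual algorithm combines this mixing with ADMM and layer-wise convex training; only the consensus mechanism is shown here.

```python
import numpy as np

# Synchronous decentralized average consensus: each worker repeatedly mixes
# its local value with its neighbors' using a doubly-stochastic matrix W.
# With no master node, all workers converge to the global average, which is
# the mechanism behind "centralized equivalence" in decentralized training.
n = 6
W = np.zeros((n, n))
for i in range(n):                      # ring topology with simple weights
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.arange(n, dtype=float)           # each worker holds one local value
target = x.mean()
for _ in range(200):
    x = W @ x                           # one synchronized communication round
```

Doubly-stochastic means both the rows and the columns of W sum to one; row-stochasticity makes each update a local average, and column-stochasticity preserves the global sum, so the fixed point is the true mean.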
7.
  • Liang, Xinyue, et al. (authors)
  • Asynchronous Decentralized Learning of Randomization-based Neural Networks
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • In a communication network, decentralized learning refers to the knowledge collaboration between the different local agents (processing nodes) to improve the local estimation performance without sharing private data. The ideal case is that the decentralized solution approximates the centralized solution, as if all the data are available at a single node, and requires low computational power and communication overhead. In this work, we propose a decentralized learning of randomization-based neural networks with asynchronous communication and achieve centralized equivalent performance. We propose an ARock-based alternating-direction-method-of-multipliers (ADMM) algorithm that enables individual node activation and one-sided communication in an undirected connected network, characterized by a doubly-stochastic network policy matrix. Besides, the proposed algorithm reduces the computational cost and communication overhead due to its asynchronous nature. We study the proposed algorithm on different randomization-based neural networks, including ELM, SSFN, RVFL, and its variants, to achieve the centralized equivalent performance under efficient computation and communication costs. We also show that the proposed asynchronous decentralized learning algorithm can outperform a synchronous learning algorithm regarding computational complexity, especially when the network connections are sparse.
  •  
8.
  • Liang, Xinyue, et al. (authors)
  • Asynchronous Decentralized Learning of Randomization-based Neural Networks with Centralized Equivalence
  • Other publication (other academic/artistic), abstract:
    • Siloed data localization has become a major challenge for machine learning. Restricted by scattered data locations and privacy regulations on information sharing, recent studies aim to develop collaborative machine learning techniques in which local models approximate the centralized performance without sharing real data. In this work, we design an asynchronous decentralized learning application that achieves centralized equivalent performance with low computational complexity and communication overhead. We propose an asynchronous decentralized learning algorithm (Async-dl) using ARock-based ADMM to realize decentralized variants of various randomization-based feedforward neural networks. The proposed algorithm enables single node activation and one-sided communication in an undirected and weighted communication network, characterized by a doubly-stochastic network policy matrix. Moreover, the proposed algorithm obtains a centralized solution with reduced computational cost and improved communication efficiency. We investigate five randomization-based neural networks and apply Async-dl to realize their decentralized setups: the extreme learning machine (ELM), random vector functional link (RVFL), deep random vector functional link (dRVFL), ensemble deep random vector functional link (edRVFL), and self size-estimating feedforward neural network (SSFN). Extensive experiments show that the proposed asynchronous decentralized learning algorithm outperforms the synchronous learning algorithm in computational complexity and communication efficiency, reflected in reduced training time, especially when the network connections are sparse. We also observe that the proposed algorithm is fault-tolerant: even if some communication fails, it still converges to the centralized solution.
  •  
9.
  • Liang, Xinyue, et al. (authors)
  • Asynchronous decentralized learning of a neural network
  • 2020
  • Published in: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 3947-3951
  • Conference paper (peer-reviewed), abstract:
    • In this work, we exploit an asynchronous computing framework, namely ARock, to learn a deep neural network called the self size-estimating feedforward neural network (SSFN) in a decentralized scenario. Using this algorithm, namely asynchronous decentralized SSFN (dSSFN), we provide the centralized equivalent solution under certain technical assumptions. Asynchronous dSSFN relaxes the communication bottleneck by allowing one node activation and one-sided communication, which reduces the communication overhead significantly and consequently increases the learning speed. We compare asynchronous dSSFN with traditional synchronous dSSFN in the experimental results, which show the competitive performance of asynchronous dSSFN, especially when the communication network is sparse.
  •  
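The single-node-activation idea can be illustrated with a toy asynchronous consensus loop: at each tick, exactly one randomly activated node reads its neighbors' current values and updates only itself. This is a deliberately simplified stand-in for the ARock-based update; the real dSSFN algorithm updates model parameters, not scalars.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy asynchronous consensus: the same ring and doubly-stochastic mixing
# matrix as in the synchronous case, but only one node is active per tick
# (one-sided communication: it reads neighbors, writes only its own value).
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.arange(n, dtype=float)
for _ in range(20000):
    i = rng.integers(n)                 # only node i is active this tick
    x[i] = W[i] @ x                     # local averaging update

consensus_spread = x.max() - x.min()
```

Each update is a convex combination of current values, so the spread between the largest and smallest node values can only shrink, and with random activations the nodes contract to a common consensus value without any global synchronization barrier.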
10.
  • Liang, Xinyue, et al. (authors)
  • AVIATOR: fAst Visual Perception and Analytics for Drone-Based Traffic Operations
  • 2023
  • Published in: 2023 IEEE 26th International Conference on Intelligent Transportation Systems, ITSC 2023. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 2959-2964
  • Conference paper (peer-reviewed), abstract:
    • Drone-based systems are an emerging technology for advanced applications in Intelligent Transport Systems (ITS). This paper presents our latest developments of a visual perception and analysis system, called AVIATOR, for drone-based road traffic management. The system advances from the previous SeeFar system in several aspects. For visual perception, deep-learning based computer vision models still play the central role, but the current system development focuses on fast and efficient detection and tracking performance during real-time image processing. To achieve that, YOLOv7 and ByteTrack models have replaced the previous perception modules to gain better computational performance. Meanwhile, a lane-based traffic stream detection module is added for recognizing detailed traffic flow per lane, enabling more detailed estimation of traffic flow patterns. The traffic analytics module has been modified to estimate traffic states using lane-based data collection, including detailed lane-based traffic flow counting as well as traffic density estimation according to vehicle arrival patterns per lane.
  •  
11.
  • Liang, Xinyue (author)
  • Decentralized Learning of Randomization-based Neural Networks
  • 2021
  • Doctoral thesis (other academic/artistic), abstract:
    • Machine learning and artificial intelligence have been widely explored and have developed rapidly to meet expanding needs in almost every aspect of human development. In the big data era, siloed data localization has become a major challenge for machine learning. Restricted by scattered data locations and privacy regulations on information sharing, recent studies aim to develop collaborative machine learning techniques in which local models approximate the centralized performance without sharing real data. Privacy preservation is as important as model performance and model complexity. This thesis investigates a class of low computational complexity learning models: randomization-based feed-forward neural networks (RFNs). As a class of artificial neural networks (ANNs), RFNs enjoy a favorable balance between low computational complexity and satisfying performance, especially for non-image data. Driven by the advantages of RFNs and the need for distributed learning solutions, we study the potential and applicability of RFNs and distributed optimization methods that may lead to the design of decentralized variants of RFNs that deliver the desired results. Firstly, we provide decentralized learning algorithms based on RFN architectures for undirected network topologies using synchronous communication. We investigate decentralized learning of five RFNs that provides centralized equivalent performance, as if the total training data samples were available at a single node. Two of the five neural networks are shallow, and the others are deep. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning. We are then motivated to design an asynchronous decentralized learning application that achieves centralized equivalent performance with low computational complexity and communication overhead. We propose an asynchronous decentralized learning algorithm using ARock-based ADMM to realize decentralized variants of a variety of RFNs. The proposed algorithm enables single node activation and one-sided communication in an undirected communication network, characterized by a doubly-stochastic network policy matrix. Moreover, the proposed algorithm obtains the centralized solution with reduced computational cost and improved communication efficiency. Finally, we consider the problem of training a neural net over a decentralized scenario with a high sparsity level in connections. The issue is addressed by adapting a recently proposed incremental learning approach, called 'learning without forgetting'. While an incremental learning approach assumes data availability in a sequence, nodes of the decentralized scenario cannot share data between them, and there is no master node. Nodes can communicate information about model parameters among neighbors. Communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario.
  •  
12.
  • Liang, Xinyue, et al. (authors)
  • Decentralized Learning of Randomization-based Neural Networks with Centralized Equivalence
  • Other publication (other academic/artistic), abstract:
    • We consider a decentralized learning problem where training data samples are distributed over agents (processing nodes) of an underlying communication network topology without any central (master) node. Due to information privacy and security issues in a decentralized setup, nodes are not allowed to share their training data with each other; only parameters of the neural network may be shared. This article investigates decentralized learning of randomization-based neural networks that provides centralized equivalent performance, as if the full training data were available at a single node. We consider five randomization-based neural networks that use convex optimization for learning. Two of the five neural networks are shallow, and the others are deep. The use of convex optimization is the key to applying the alternating-direction-method-of-multipliers (ADMM) with decentralized average consensus (DAC). This helps us to establish decentralized learning with centralized equivalence. For the underlying communication network topology, we use a doubly-stochastic network policy matrix and synchronous communications. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning.
  •  
13.
  • Liang, Xinyue, et al. (authors)
  • Decentralized learning of randomization-based neural networks with centralized equivalence
  • 2022
  • Published in: Applied Soft Computing. - : Elsevier BV. - 1568-4946 .- 1872-9681. ; 115
  • Journal article (peer-reviewed), abstract:
    • We consider a decentralized learning problem where training data samples are distributed over agents (processing nodes) of an underlying communication network topology without any central (master) node. Due to information privacy and security issues in a decentralized setup, nodes are not allowed to share their training data; only parameters of the neural network may be shared. This article investigates decentralized learning of randomization-based neural networks that provides centralized equivalent performance, as if the full training data were available at a single node. We consider five randomization-based neural networks that use convex optimization for learning. Two of the five neural networks are shallow, and the others are deep. The use of convex optimization is the key to applying the alternating-direction-method-of-multipliers with decentralized average consensus. This helps us to establish decentralized learning with centralized equivalence. For the underlying communication network topology, we use a doubly-stochastic network policy matrix and synchronous communications. Experiments with nine benchmark datasets show that the five neural networks provide good performance while requiring low computational and communication complexity for decentralized learning. The performance rankings of the five neural networks using the Friedman rank are also included in the results: ELM < RVFL < dRVFL < edRVFL < SSFN.
  •  
14.
  • Liang, Xinyue, et al. (authors)
  • DeePMOS: Deep Posterior Mean-Opinion-Score of Speech
  • 2023
  • Published in: Interspeech 2023. - : International Speech Communication Association. ; pp. 526-530
  • Conference paper (peer-reviewed), abstract:
    • We propose a deep neural network (DNN) based method that provides a posterior distribution of mean-opinion-score (MOS) for an input speech signal. The DNN outputs parameters of the posterior, mainly the posterior's mean and variance. The proposed method is referred to as deep posterior MOS (DeePMOS). The relevant training data is inherently limited in size (limited number of labeled samples) and noisy due to the subjective nature of human listeners. For robust training of DeePMOS, we use a combination of maximum-likelihood learning, stochastic gradient noise, and a student-teacher learning setup. Using the mean of the posterior as a point estimate, we evaluate standard performance measures of the proposed DeePMOS. The results show comparable performance with existing DNN-based methods that only provide point estimates of the MOS. Then we provide an ablation study showing the importance of various components in DeePMOS.
  •  
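The posterior-output idea can be sketched with the Gaussian negative log-likelihood that such a network would minimize over its predicted mean and variance. The function below is a generic Gaussian NLL; the labels, predictions, and training details are illustrative assumptions, not DeePMOS's actual values.

```python
import numpy as np

def gaussian_nll(mu: np.ndarray, var: np.ndarray, y: np.ndarray) -> float:
    """Mean Gaussian negative log-likelihood of labels y under a predicted
    per-sample posterior with mean mu and variance var."""
    return float(np.mean(0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)))

y = np.array([3.8, 4.1, 2.5])          # subjective MOS labels (illustrative)
mu = np.array([3.7, 4.0, 2.9])         # predicted posterior means
var = np.array([0.2, 0.1, 0.4])        # predicted posterior variances

loss = gaussian_nll(mu, var, y)
point_estimate = mu                     # the posterior mean doubles as a point MOS
```

Training on this loss rewards the network both for accurate means and for honest variances: overconfident (too-small) variances inflate the squared-error term, while underconfident ones inflate the log term.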
15.
  • Liang, Xinyue, et al. (authors)
  • Distributed Large Neural Network with Centralized Equivalence
  • 2018
  • Published in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). - : IEEE. ; pp. 2976-2980
  • Conference paper (peer-reviewed), abstract:
    • In this article, we develop a distributed algorithm for learning a large neural network that is deep and wide. We consider a scenario where the training dataset is not available in a single processing node but is distributed among several nodes. We show that a recently proposed large neural network architecture called the progressive learning network (PLN) can be trained in a distributed setup with centralized equivalence, meaning we would get the same result if the data were available at a single node. Using a distributed convex optimization method called the alternating-direction-method-of-multipliers (ADMM), we perform training of PLN in the distributed setup.
  •  
16.
  • Liang, Xinyue, et al. (authors)
  • Feature Reuse For A Randomization Based Neural Network
  • 2021
  • Published in: 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 2805-2809
  • Conference paper (peer-reviewed), abstract:
    • We propose a feature reuse approach for an existing multi-layer randomization based feedforward neural network. The feature representation is directly linked among all the necessary hidden layers. For the feature reuse at a particular layer, we concatenate features from the previous layers to construct a large-dimensional feature for the layer. The large-dimensional concatenated feature is then efficiently used to learn a limited number of parameters by solving a convex optimization problem. Experiments show that the proposed model improves the performance in comparison with the original neural network without a significant increase in computational complexity.
  •  
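The feature-reuse mechanism described above can be sketched as follows: features from earlier layers are concatenated into one large feature vector per sample, and the output weights are learned by a convex regularized least-squares fit with a closed-form solution. The dimensions, random features, and ridge regularizer here are illustrative assumptions; the paper's convex optimization problem may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.standard_normal((100, 10))                       # input samples
T = rng.standard_normal((100, 3))                        # training targets
h1 = np.maximum(X @ rng.standard_normal((10, 20)), 0)    # layer-1 features (ReLU)
h2 = np.maximum(h1 @ rng.standard_normal((20, 20)), 0)   # layer-2 features (ReLU)

# Feature reuse: concatenate the input and all previous layers' features
# into one large-dimensional feature per sample.
Z = np.hstack([X, h1, h2])

# Convex learning of the limited number of new parameters: a ridge
# (regularized least-squares) fit with a closed-form solution.
lam = 1e-2
O = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ T)
pred = Z @ O
```

Because only the output mapping O is optimized, and the problem is a ridge regression, the extra parameters introduced by concatenation are learned at low computational cost.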
17.
  • Liang, Xinyue, et al. (authors)
  • Learning without Forgetting for Decentralized Neural Nets with Low Communication Overhead
  • 2021
  • Published in: 2020 28th European Signal Processing Conference (EUSIPCO). - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 2185-2189
  • Conference paper (peer-reviewed), abstract:
    • We consider the problem of training a neural net over a decentralized scenario with a low communication overhead. The problem is addressed by adapting a recently proposed incremental learning approach, called 'learning without forgetting'. While an incremental learning approach assumes data availability in a sequence, nodes of the decentralized scenario cannot share data between them, and there is no master node. Nodes can communicate information about model parameters among neighbors. Communication of model parameters is the key to adapting the 'learning without forgetting' approach to the decentralized scenario. We use random walk based communication to handle a highly limited communication resource.
  •  
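The random-walk communication pattern can be illustrated with a toy example in which a single token carries the shared state along the node graph, so at most one message is in flight at any time. This is a simplified stand-in for model-parameter exchange; the paper's actual update rule differs.

```python
import numpy as np

rng = np.random.default_rng(2)

# A token performs a random walk over a ring of nodes. Only the node holding
# the token computes, then forwards the token to a random neighbor. Here the
# token keeps a running average of the private values it has seen, as a
# stand-in for incremental model updates under a tight communication budget.
n = 6
local = rng.standard_normal(n)          # one private value per node
node, token, visits = 0, 0.0, 0
for _ in range(50000):
    visits += 1
    token += (local[node] - token) / visits   # incremental running average
    node = (node + rng.choice([-1, 1])) % n   # walk to a ring neighbor
```

Because the walk's stationary distribution on this symmetric ring is uniform, each node contributes equally in the long run, and the token's running average approaches the network-wide mean using only one point-to-point message per step.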
18.
  • Ma, Xiaoliang, Docent, et al. (authors)
  • METRIC: Toward a Drone-based Cyber-Physical Traffic Management System
  • 2022
  • Published in: Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics. - : Institute of Electrical and Electronics Engineers (IEEE). ; pp. 3324-3329
  • Conference paper (peer-reviewed), abstract:
    • Drone-based systems have great potential for traffic monitoring and other advanced applications in Intelligent Transport Systems (ITS). This paper introduces our latest efforts in digitalising road traffic with various types of sensing systems, among which visual detection by drones provides a promising technical solution. A platform, called METRIC, is under development to carry out real-time traffic measurement and prediction using drone-based data collection. The current system is designed as a cyber-physical system (CPS) with essential functions aiming for visual traffic detection and analysis, real-time traffic estimation and prediction, as well as decision support based on simulation. In addition to the computer vision functions developed in the earlier stage, this paper also presents the CPS system architecture and the current implementation of the drone front-end system and a simulation-based system being used for further drone operations.
  •  