SwePub

Result list for the search "WFRF:(Lu Haibo)"

Search: WFRF:(Lu Haibo)

  • Results 1-10 of 18
1.
  • Lu, Haibo, et al. (author)
  • Comparing machine learning-derived global estimates of soil respiration and its components with those from terrestrial ecosystem models
  • 2021
  • In: Environmental Research Letters. - : IOP Publishing. - 1748-9318 .- 1748-9326. ; 16:5
  • Journal article (peer-reviewed) abstract
    • The CO2 efflux from soil (soil respiration (SR)) is one of the largest fluxes in the global carbon (C) cycle, and its response to climate change could strongly influence future atmospheric CO2 concentrations. Still, global estimates of SR and of its autotrophic (AR) and heterotrophic (HR) components diverge widely among process-based terrestrial ecosystem models, so alternatively derived global benchmark values are warranted to constrain the output of these models. In this study, we developed models based on the global soil respiration database (version 5.0), using the random forest (RF) method, to generate the global benchmark distribution of total SR and its components. The benchmark values were then compared with the output of ten different global terrestrial ecosystem models. Our observationally derived global mean annual benchmark rates during 1982-2012 were 85.5 ± 40.4 (SD) Pg C yr-1 for SR, 50.3 ± 25.0 (SD) Pg C yr-1 for HR and 35.2 Pg C yr-1 for AR. Evaluated against the observations, the RF models performed better in both SR and HR simulations than all investigated terrestrial ecosystem models. Large divergences in simulating SR and its components were observed among the terrestrial ecosystem models: the estimated global SR and HR ranged from 61.4 to 91.7 Pg C yr-1 and 39.8 to 61.7 Pg C yr-1, respectively. The largest discrepancy lay in the estimation of AR, where the estimates among the ecosystem models (12.0-42.3 Pg C yr-1) differed by up to a factor of 3.5. The contribution of AR to SR also varied widely among the ecosystem models, ranging from 18% to 48%, and differed from the RF estimate (41%). This study generated global fluxes of SR and its components (HR and AR), which are useful benchmarks for constraining the performance of terrestrial ecosystem models.
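A minimal sketch of the random-forest benchmarking idea described in the entry above, using pandas and scikit-learn. The file name and column names are hypothetical stand-ins for a site-level extract of the global soil respiration database (SRDB v5.0); they are not from the paper.

```python
# Illustrative sketch only: fit an RF regressor to site-level SR observations
# from hypothetical climate/soil covariates, then (conceptually) apply it to
# gridded global fields to obtain a benchmark SR distribution.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("srdb_v5_sites.csv")          # hypothetical extract of SRDB v5.0
features = ["mean_annual_temp", "annual_precip", "soil_organic_carbon", "lai"]
X, y = df[features], df["annual_sr_gC_m2_yr"]  # observed annual SR per site (assumed column)

rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=3, random_state=0)
print("cross-validated R^2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

rf.fit(X, y)
# The fitted model would then be applied to global gridded covariates; HR is
# modelled analogously and AR taken as the difference SR - HR.
```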
2.
3.
  • Ding, Zongling, et al. (author)
  • The Finite-Size Effect on the Transport Properties in Edge-Modified Graphene Nanoribbon-Based Molecular Devices
  • 2011
  • In: Journal of Computational Chemistry. - : Wiley. - 0192-8651 .- 1096-987X. ; 32:8, pp. 1753-1759
  • Journal article (peer-reviewed) abstract
    • The size dependence of the electronic and transport properties of molecular devices based on edge-modified graphene nanoribbon (GNR) slices is investigated using density-functional theory and Green's function theory. Two edge-modifying functional group pairs are considered. An energy gap is found in all the GNR slices; the gap decreases exponentially with increasing slice size along the two perpendicular orientations in both edge-terminated cases. The tunneling probability and the number of conducting channels decrease with increasing GNR-slice size in the junctions. The results indicate that acceptor-donor pair edge modulation can improve the quantum conductance and reduce the finite-size effect on the transmission capability of GNR slice-based molecular devices.
4.
  • Ding, Zongling, et al. (author)
  • Transport Properties of Graphene Nanoribbon-Based Molecular Devices
  • 2011
  • In: Journal of Computational Chemistry. - : Wiley. - 0192-8651 .- 1096-987X. ; 32:4, pp. 737-741
  • Journal article (peer-reviewed) abstract
    • The electronic and transport properties of an edge-modified prototype graphene nanoribbon (GNR) slice are investigated using density functional theory and Green's function theory. Two decorating functional group pairs are considered: hydrogen-hydrogen and NH2-NO2, with NH2 and NO2 serving as a donor and an acceptor, respectively. The molecular junctions consist of carbon-based GNR slices sandwiched between Au electrodes. Nonlinear I-V curves and quantum conductance are found in all the junctions, and with increasing source-drain bias the enhancement of conductance is quantized. Several key factors determining the transport properties, such as the electron transmission probabilities, the density of states and the composition of the frontier molecular orbitals, are discussed in detail. The transport properties are shown to be sensitive to the edge type of the carbon atoms, and the acceptor-donor functional pairs can change the conductance of the junctions by orders of magnitude.
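Both GNR entries above rest on Green's-function (Landauer/NEGF-style) transport calculations. The toy example below is not the papers' DFT+NEGF setup, only the underlying formalism in miniature: the transmission T(E) = Tr[Γ_L G Γ_R G†] of a small tight-binding "molecule" coupled to wide-band leads. The chain length, hopping and coupling values are illustrative assumptions.

```python
# Toy Landauer transmission with a retarded Green's function and
# wide-band-limit lead self-energies (illustration only).
import numpy as np

N, t = 6, -2.7                      # number of sites, hopping (eV), graphene-like value
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t   # nearest-neighbour chain Hamiltonian

gamma = 0.5                                        # lead coupling strength (eV), assumed
Gam_L = np.zeros((N, N)); Gam_L[0, 0] = gamma      # left lead couples to site 0
Gam_R = np.zeros((N, N)); Gam_R[-1, -1] = gamma    # right lead couples to site N-1
Sigma = -0.5j * (Gam_L + Gam_R)                    # wide-band-limit self-energies

for E in np.linspace(-3.0, 3.0, 7):
    G = np.linalg.inv((E + 1e-9j) * np.eye(N) - H - Sigma)   # retarded Green's function
    T = np.real(np.trace(Gam_L @ G @ Gam_R @ G.conj().T))    # transmission probability
    print(f"E = {E:+.1f} eV   T(E) = {T:.3f}")
```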
5.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Head Orientation Modeling : Geometric Head Pose Estimation using Monocular Camera
  • 2013
  • In: Proceedings of the 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013. - : The Institute of Industrial Applications Engineers. ; , pp. 149-153
  • Conference paper (other academic/artistic) abstract
    • In this paper we propose a simple and novel method for head pose estimation using 3D geometric modeling. The algorithm initially employs Haar-like features to detect the face and the facial feature areas (more precisely, the eyes). For robust tracking of these regions it also uses the Tracking-Learning-Detection (TLD) framework on a given video sequence. Based on the two eye areas, we model a pivot point using a distance measure derived from anthropometric statistics and the MPEG-4 coding scheme. This simple geometric approach relies on the structure of the facial features in the camera-view plane to estimate the yaw, pitch and roll of the head. The accuracy and effectiveness of the proposed method are reported on live video sequences against a head-mounted inertial measurement unit (IMU).
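A loose illustration of the geometric idea in the entry above: detect face and eyes with Haar-like features (OpenCV cascades), then derive crude head angles from the eye geometry. This is not the paper's pivot-point model (which uses anthropometric statistics, MPEG-4 and TLD tracking); the frame file and the yaw scaling constant are assumptions.

```python
# Sketch: Haar-cascade face/eye detection and a rough roll/yaw estimate.
import cv2
import numpy as np

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("frame.jpg")                     # hypothetical video frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
    eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) < 2:
        continue
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]   # two largest detections
    eyes = sorted(eyes, key=lambda e: e[0])                            # left eye first
    (x1, y1), (x2, y2) = [(ex + ew / 2, ey + eh / 2) for ex, ey, ew, eh in eyes]
    roll = np.degrees(np.arctan2(y2 - y1, x2 - x1))    # tilt of the line joining the eyes
    yaw = 90.0 * ((x1 + x2) / 2 - w / 2) / (w / 2)     # crude left/right-turn proxy (assumed scaling)
    print(f"roll = {roll:.1f} deg, yaw ~ {yaw:.1f} deg (pitch needs the pivot-point model)")
```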
6.
  • Khan, Muhammad Sikandar Lal, 1988-, et al. (author)
  • Tele-embodied agent (TEA) for video teleconferencing
  • 2013
  • In: Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM 2013. - New York, NY, USA : Association for Computing Machinery (ACM). - 9781450326483
  • Conference paper (peer-reviewed) abstract
    • We propose a design for a teleconferencing system that expresses nonverbal behavior (in our case head gestures) along with audio-video communication. Previous audio-video conferencing systems fall short of presenting the nonverbal behaviors that we, as humans, normally use in face-to-face interaction. Recently, research on teleconferencing systems has expanded to include nonverbal cues of the remote person in distance communication. Accurately representing nonverbal gestures in such systems is still challenging because they depend on hand-operated devices (like a mouse or keyboard) and still fall short of presenting accurate human gestures. We believe that incorporating embodied interaction in video teleconferencing, i.e., using the physical world as a medium for interacting with digital technology, can yield a faithful representation of nonverbal behavior. We introduce an experimental platform named Tele-Embodied Agent (TEA), which incorporates the remote person's head gestures to study a new paradigm of embodied interaction in video teleconferencing. Our preliminary tests show the accuracy (with respect to pose angles) and efficiency (with respect to time) of the proposed design. TEA can be used in the medical field, factories, offices, the gaming and music industries, and for training.
7.
  • Lu, G., et al. (author)
  • Convolutional neural network for facial expression recognition
  • 2016
  • In: Journal of Nanjing University of Posts and Telecommunications. - : Journal of Nanjing Institute of Posts and Telecommunication. - 1673-5439. ; 36:1, pp. 16-22
  • Journal article (peer-reviewed) abstract
    • To avoid the complex explicit feature extraction process of traditional expression recognition, a convolutional neural network (CNN) for facial expression recognition is proposed. First, the facial expression image is normalized and implicit features are extracted using trainable convolution kernels. Then, max pooling is used to reduce the dimensionality of the extracted implicit features. Finally, a Softmax classifier is used to classify the facial expressions of the test samples. Experiments are carried out on the CK+ facial expression database using a graphics processing unit (GPU). The experimental results show the performance and generalization ability of the CNN for facial expression recognition.
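A minimal sketch of the kind of pipeline the abstract above describes (normalized face image, trainable convolutions, max pooling, softmax classifier), written with Keras. The 48x48 grayscale input, layer sizes and seven expression classes are assumptions, not the architecture reported in the paper.

```python
# Illustrative CNN for facial expression classification (sketch only).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),              # normalized grayscale face image
    layers.Conv2D(32, 5, activation="relu"),      # trainable convolution kernels
    layers.MaxPooling2D(2),                       # max pooling reduces feature dimensions
    layers.Conv2D(64, 5, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(7, activation="softmax"),        # softmax over expression classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=30)  # CK+ data assumed
```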
8.
  • Lu, G., et al. (author)
  • Micro-expression recognition based on LBP-TOP features
  • 2017
  • In: Nanjing Youdian Daxue Xuebao (Ziran Kexue Ban)/Journal of Nanjing University of Posts and Telecommunications (Natural Science). - : Journal of Nanjing Institute of Posts and Telecommunications. - 1673-5439. ; 37:6, pp. 1-7
  • Journal article (peer-reviewed) abstract
    • Micro-expressions are involuntary facial expressions that reveal true feelings when a person tries to conceal their facial expressions. Compared with normal facial expressions, the most significant characteristics of micro-expressions are their short duration and weak intensity, which make them difficult to recognize. In this paper, a micro-expression recognition method based on local binary patterns from three orthogonal planes (LBP-TOP) and a support vector machine (SVM) classifier is proposed. First, the LBP-TOP operators are used to extract micro-expression features. Then, a feature selection algorithm combining ReliefF with a manifold learning algorithm based on locally linear embedding (LLE) is proposed to reduce the dimensionality of the extracted LBP-TOP feature vectors. Finally, the SVM classifier with a radial basis function (RBF) kernel assigns test samples to five categories of micro-expression: happiness, disgust, repression, surprise, and others. Experiments are carried out on the micro-expression database CASME II using leave-one-subject-out cross validation (LOSO-CV); the classification accuracy reaches 58.98%. The experimental results show the effectiveness of the proposed method.
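A sketch of the classification stage described above: precomputed LBP-TOP feature vectors classified with an RBF-kernel SVM under leave-one-subject-out cross validation. The LBP-TOP extraction itself and the ReliefF + LLE dimensionality reduction are omitted; the .npy files are hypothetical inputs.

```python
# Illustrative LOSO-CV evaluation of an RBF-kernel SVM on LBP-TOP features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.load("lbp_top_features.npy")   # (n_clips, n_features), hypothetical precomputed features
y = np.load("labels.npy")             # five classes: happiness, disgust, repression, surprise, others
subjects = np.load("subjects.npy")    # subject id per clip, defining the LOSO folds

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"LOSO accuracy: {scores.mean():.4f}")
```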
9.
  • Lu, G., et al. (author)
  • Multi-modal emotion feature fusion method based on genetic algorithm
  • 2019
  • In: Journal of Nanjing University of Posts and Telecommunications. - : Journal of Nanjing Institute of Posts and Telecommunications. - 1673-5439. ; 39:5, pp. 41-47
  • Journal article (peer-reviewed) abstract
    • To improve the accuracy of emotion recognition, and to address the low accuracy of single-modal emotion recognition and the shortcomings of conventional feature fusion methods, this paper proposes a multi-modal emotion feature fusion method based on a genetic algorithm. Features from multiple emotion modalities are selected, crossed and recombined using the genetic algorithm. Emotion recognition tests are carried out on the eNTERFACE'05 audio-visual emotion database, comparing single-modal emotion recognition based on facial expression or speech with various bimodal emotion recognition schemes based on feature-level or decision-level fusion. The experimental results show that bimodal emotion recognition outperforms single-modal emotion recognition, and that the proposed genetic-algorithm-based method is superior to other conventional feature fusion methods, demonstrating its feasibility and effectiveness.
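A minimal sketch of feature-level fusion by a genetic algorithm in the spirit of the entry above: a binary mask over the concatenated audio + facial feature vector is selected, crossed and mutated, with cross-validated classification accuracy as the fitness. The random data stands in for eNTERFACE'05 features, and all GA settings (population size, mutation rate, generations) are assumptions.

```python
# Illustrative GA-based feature selection over a fused multi-modal feature vector.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))       # placeholder fused audio + video features
y = rng.integers(0, 6, size=200)     # placeholder labels for six emotion classes

def fitness(mask):
    # Fitness = cross-validated accuracy of an SVM on the selected features.
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))       # initial population of binary masks
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # selection: keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, X.shape[1])             # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.02          # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "of", X.shape[1])
```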
10.
  • Lu, G., et al. (author)
  • Speech emotion recognition based on long short-term memory and convolutional neural networks
  • 2018
  • In: Journal of Nanjing University of Posts and Telecommunications. - : Journal of Nanjing Institute of Posts and Telecommunications. - 1673-5439. ; 38:5, pp. 63-69
  • Journal article (peer-reviewed) abstract
    • To improve the accuracy of speech emotion recognition, a speech emotion recognition method based on long short-term memory (LSTM) and a convolutional neural network (CNN) is proposed. First, the Mel-frequency spectrum sequence of the speech signal is extracted and fed into the LSTM network to extract the temporal context features of the speech signal. On this basis, the CNN is used to extract high-level emotional features from the low-level features and to complete the emotional classification of the speech signals. Emotion recognition tests are carried out on the eNTERFACE'05, RML and AFEW6.0 databases. The experimental results show that the average recognition rates of the method on these three databases are 49.15%, 85.38% and 37.90%, respectively. In addition, compared with the traditional speech emotion recognition method and with speech emotion recognition methods based on LSTM or CNN alone, the method shows better recognition performance.
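A sketch of the LSTM + CNN arrangement the abstract above describes: a Mel-spectrogram sequence feeds an LSTM that models temporal context, a convolutional stage extracts higher-level emotional features, and a softmax layer classifies. The sequence length, 40 Mel bands, layer sizes and six emotion classes are assumptions, not the configuration reported in the paper.

```python
# Illustrative LSTM + CNN model for speech emotion classification (sketch only).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(300, 40)),               # (frames, Mel bands) per utterance, assumed sizes
    layers.LSTM(128, return_sequences=True),     # temporal context features over the spectrum sequence
    layers.Conv1D(64, 5, activation="relu"),     # higher-level features from the LSTM output
    layers.GlobalMaxPooling1D(),
    layers.Dense(6, activation="softmax"),       # softmax over emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```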