SwePub
Search the SwePub database


Results for search "WFRF:(Ban Bo)"


  • Result 1-21 of 21
1.
  • Abelev, Betty, et al. (author)
  • Long-range angular correlations on the near and away side in p-Pb collisions at √s_NN = 5.02 TeV
  • 2013
  • In: Physics Letters. Section B: Nuclear, Elementary Particle and High-Energy Physics. - : Elsevier BV. - 0370-2693. ; 719:1-3, s. 29-41
  • Journal article (peer-reviewed), abstract:
    • Angular correlations between charged trigger and associated particles are measured by the ALICE detector in p-Pb collisions at a nucleon-nucleon centre-of-mass energy of 5.02 TeV for transverse momentum ranges within 0.5 < p_T,assoc < p_T,trig < 4 GeV/c. The correlations are measured over two units of pseudorapidity and full azimuthal angle in different intervals of event multiplicity, and expressed as associated yield per trigger particle. Two long-range ridge-like structures, one on the near side and one on the away side, are observed when the per-trigger yield obtained in low-multiplicity events is subtracted from the one in high-multiplicity events. The excess on the near side is qualitatively similar to that recently reported by the CMS Collaboration, while the excess on the away side is reported for the first time. The two-ridge structure projected onto azimuthal angle is quantified with the second and third Fourier coefficients as well as by near-side and away-side yields and widths. The yields on the near side and on the away side are equal within the uncertainties for all studied event multiplicity and p_T bins, and the widths show no significant evolution with event multiplicity or p_T. These findings suggest that the near-side ridge is accompanied by an essentially identical away-side ridge. © 2013 CERN. Published by Elsevier B.V. All rights reserved.
  •  
2.
  • Abelev, Betty, et al. (author)
  • Measurement of prompt J/ψ and beauty hadron production cross sections at mid-rapidity in pp collisions at √s = 7 TeV
  • 2012
  • In: Journal of High Energy Physics. - 1029-8479. ; :11
  • Journal article (peer-reviewed), abstract:
    • The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at √s = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity L_int = 5.6 nb⁻¹. The fraction of J/ψ from the decay of long-lived beauty hadrons was determined for J/ψ candidates with transverse momentum p_t > 1.3 GeV/c and rapidity |y| < 0.9. The cross section for prompt J/ψ mesons, i.e. directly produced J/ψ and prompt decays of heavier charmonium states such as the ψ(2S) and χ_c resonances, is σ(prompt J/ψ)(p_t > 1.3 GeV/c, |y| < 0.9) = 8.3 ± 0.8 (stat.) ± 1.1 (syst.) +1.5/−1.4 (syst. pol.) μb. The cross section for the production of b-hadrons decaying to J/ψ with p_t > 1.3 GeV/c and |y| < 0.9 is σ(J/ψ ← h_B)(p_t > 1.3 GeV/c, |y| < 0.9) = 1.46 ± 0.38 (stat.) +0.26/−0.32 (syst.) μb. The results are compared to QCD model predictions. The shapes of the p_t and y distributions of b-quarks predicted by perturbative QCD calculations are used to extrapolate the measured cross section to derive the bb̄ pair total cross section and dσ/dy at mid-rapidity.
  •  
3.
  • Bo, Mao, et al. (author)
  • Real-time visualization of 3D city models at street-level based on visual saliency
  • 2015
  • In: Science China: Earth Sciences. - : Springer Science and Business Media LLC. - 1674-7313 .- 1869-1897. ; 58:3, s. 448-461
  • Journal article (peer-reviewed), abstract:
    • Street-level visualization is an important application of 3D city models. Challenges to street-level visualization include the cluttering of buildings due to fine detail and visualization performance. In this paper, a novel method is proposed for street-level visualization based on visual saliency evaluation. The basic idea of the method is to preserve the salient buildings in a scene while removing the non-salient ones. The method can be divided into pre-processing procedures and real-time visualization. The first step in pre-processing is to convert 3D building models at higher Levels of Detail (LoDs) into LoD1 models with simplified ground plans. Then, a number of index viewpoints are created along the streets; these indices refer to both the position and the direction of each street site. A visual saliency value is computed for each building, with respect to the index site, based on the visual difference between the original model and the generalized model. We calculate and evaluate three methods for visual saliency: local difference, global difference and minimum projection area. The real-time visualization process begins by mapping the observer to its closest indices. The street view is then generated based on the building information stored in those indices. A user study shows that the local visual saliency method performs better than the global visual saliency, area and image-based methods, and that the framework proposed in this paper may improve the performance of 3D visualization.
  •  
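The "local difference" saliency score described in the abstract above mixes a projection-area change and a colour change between the original and the generalised (LoD1) model. A minimal sketch follows; the weighting, normalisation and function name are our illustrative assumptions, not the paper's exact formula:

```python
def local_saliency(orig_area, gen_area, orig_color, gen_color, w=0.5):
    """Toy 'local difference' saliency: weighted mix of the relative
    projection-area change and the grey-level colour change between an
    original building model and its generalised stand-in.
    A score of 0 means the generalisation is visually indistinguishable."""
    # Relative area change, guarded against a zero-area degenerate model.
    area_term = abs(orig_area - gen_area) / max(orig_area, gen_area, 1e-9)
    # Grey-level difference normalised to [0, 1].
    color_term = abs(orig_color - gen_color) / 255.0
    return w * area_term + (1 - w) * color_term
```

Buildings whose score exceeds a threshold would be kept in the street view; the rest are dropped, which is the selection step the user study evaluates.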
6.
  • Mao, Bo, 1983-, et al. (author)
  • A Dynamic Typification Method of 3D City Models using Minimum Spanning Tree
  • 2010
  • In: Proc. 6th International Conference on Geographic Information Science.
  • Conference paper (peer-reviewed), abstract:
    • A novel method based on the Minimum Spanning Tree (MST) for 3D city model typification is proposed. The 3D building models at higher LoDs are converted into LoD1 with simplified ground plan and height. The MST of the ground-plan centroids is generated and divided into sub-MSTs by the road network. Building lists with linear structure in each sub-MST are selected, and the typification model is created based on them. According to the visualization evaluation and experiments, our method can reduce the number of buildings while preserving the visual similarity of the selected city area well.
  •  
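The MST step in the abstract above (a spanning tree over building ground-plan centroids, later split by the road network) can be sketched minimally with Prim's algorithm; the function name and the O(n²) inner loop are our illustrative choices, not the paper's implementation:

```python
import math

def mst_edges(centroids):
    """Prim's algorithm: minimum spanning tree over 2D building
    centroids. Edge weight is the Euclidean distance between
    ground-plan centroids. Returns a list of (i, j) index pairs."""
    n = len(centroids)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(centroids[i], centroids[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges
```

On a row of roughly collinear buildings the MST degenerates to a chain, which is exactly the linear structure the typification step then thins out.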
7.
  • Mao, Bo, 1983-, et al. (author)
  • A Framework of Online 3D City Visualization using CityGML and X3D
  • 2009
  • In: The 6th International Symposium on Digital Earth.
  • Conference paper (peer-reviewed), abstract:
    • In this paper, a novel framework based on CityGML and X3D is proposed to support visualization of 3D city models over the Internet. In the proposed framework, the CityGML files are first parsed to acquire the city model information; citygml4j, an open-source Java API, is used for this parsing. Then, the X3D representation is generated from the city model by the proposed algorithm, which can dynamically create different 3D city models according to the corresponding Levels of Detail (LoDs). Finally, the 3D city scene in X3D format is displayed over the Internet with a Java applet or other X3D viewers; the Java applets are created using the Xj3D toolkit. A preliminary experiment shows that the framework can correctly and efficiently display the 3D city model via the Internet.
  •  
9.
  • Mao, Bo, 1983-, et al. (author)
  • A multiple representation data structure for dynamic visualisation of generalised 3D city models
  • 2011
  • In: ISPRS Journal of Photogrammetry and Remote Sensing. - : Elsevier BV. - 0924-2716 .- 1872-8235. ; 66:2, s. 198-208
  • Journal article (peer-reviewed), abstract:
    • In this paper, a novel multiple representation data structure for dynamic visualisation of 3D city models, called the CityTree, is proposed. To create a CityTree, the ground plans of the buildings are generated and simplified. Then, the buildings are divided into clusters by the road network and one CityTree is created for each cluster. The leaf nodes of the CityTree represent the original 3D objects of each building, and the intermediate nodes represent groups of close buildings. By utilising the CityTree, it is possible to provide dynamic zoom functionality in real time. The CityTree methodology is implemented in a framework where the original city model is stored in CityGML and the CityTree is stored as X3D scenes. A case study confirms the applicability of the CityTree for dynamic visualisation of 3D city models. © 2010 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.
  •  
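The leaf/intermediate-node idea behind the CityTree described above can be sketched as a small hierarchy with a depth-limited cut for zooming; class and function names are illustrative, not the authors' implementation:

```python
class CityTreeNode:
    """CityTree-like node: leaves hold original building models,
    intermediate nodes hold an aggregated stand-in for a group of
    close buildings (model ids are used here instead of geometry)."""
    def __init__(self, model, children=None):
        self.model = model
        self.children = children or []

def select_models(node, detail):
    """Cut the tree at a given detail level: detail 0 returns the
    aggregate at this node, higher values descend toward the
    original buildings, giving a dynamic zoom in real time."""
    if detail == 0 or not node.children:
        return [node.model]
    out = []
    for child in node.children:
        out.extend(select_models(child, detail - 1))
    return out
```

Zooming in then amounts to re-running the cut with a larger detail value, without touching the stored geometry.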
10.
  • Mao, Bo, 1983-, et al. (author)
  • A multiple representation data structure of 3D building textures
  • 2011
  • Conference paper (other academic/artistic), abstract:
    • Texture is an important element in the visualization of 3D city models and often takes up a large proportion of the total data volume. In order to simplify 3D city model textures for dynamic visualization at different scales, a multiple representation data structure, the TextureTree, is proposed to store building textures at different LoDs. First, the texture image is iteratively segmented by horizontal or vertical dividing zones (edges or background from edge detection) until each section is essentially a single color. The texture in each section is then represented by its main color, and the TextureTree is created based on the color differences between adjacent sections. With the TextureTree, simplified textures at different LoDs can be generated dynamically. The experimental results show that the data volume of building textures can be reduced by the TextureTree while the required visual similarity is preserved.
  •  
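The recursive segment-until-uniform construction described above can be illustrated on a 1D strip of grey values (a toy stand-in for a facade texture); the split heuristic and names are our assumptions, not the paper's edge-detection procedure:

```python
def texture_tree(colors):
    """Build a TextureTree-like hierarchy over a 1D strip of grey
    values: a uniform region becomes a leaf storing its colour;
    otherwise the strip is split at the largest adjacent colour
    difference (a stand-in for the paper's dividing zones) and the
    node stores the region's mean colour as its simplified texture."""
    if max(colors) == min(colors):
        return {"color": colors[0], "children": []}
    cut = max(range(1, len(colors)), key=lambda i: abs(colors[i] - colors[i - 1]))
    mean = sum(colors) / len(colors)
    return {"color": mean,
            "children": [texture_tree(colors[:cut]), texture_tree(colors[cut:])]}
```

Rendering at a coarse LoD reads only the upper nodes' mean colours; zooming in descends to the leaves.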
11.
  • Mao, Bo, et al. (author)
  • City Model Generalization Quality Assessment using Nested Structure of Earth Mover’s Distance
  • 2010
  • Conference paper (other academic/artistic), abstract:
    • To evaluate the quality of city model generalization, an attributed relational graph (ARG) is used to represent the features of the city models, and a Nested structure of Earth Mover's Distance (NEMD) is employed to calculate the visual similarity of the ARGs. The experiments show that the proposed method is consistent with the user survey results.
  •  
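The flat base case of the Earth Mover's Distance used in the NEMD above is easy to state for 1D histograms of equal mass: the cost is the total amount of "earth" carried across each bin boundary. A minimal sketch (the nesting over ARGs is not reproduced here):

```python
def emd_1d(p, q):
    """Earth Mover's Distance between two 1D histograms of equal
    total mass: sum of absolute cumulative differences. This is the
    flat base case; the paper nests EMD over attributed relational
    graph features to compare whole city models."""
    assert abs(sum(p) - sum(q)) < 1e-9, "equal total mass required"
    carry, total = 0.0, 0.0
    for a, b in zip(p, q):
        carry += a - b          # earth carried past this boundary
        total += abs(carry)
    return total
```

Moving one unit of mass across two bins costs 2, which matches the intuition of "work = mass × distance".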
13.
  • Mao, Bo, 1983-, et al. (author)
  • Detection and typification of linear structures for dynamic visualization of 3D city models
  • 2012
  • In: Computers, Environment and Urban Systems. - : Elsevier BV. - 0198-9715 .- 1873-7587. ; 36:3, s. 233-244
  • Journal article (peer-reviewed), abstract:
    • Cluttering is a fundamental problem in 3D city model visualization. In this paper, a novel method for removing cluttering by typification of linear building groups is proposed. This method works in static as well as dynamic visualization of 3D city models. The method starts by converting building models at higher Levels of Detail (LoDs) into LoD1 with ground plan and height. Then the Minimum Spanning Tree (MST) is generated according to the distances between the building ground plans. Based on the MST, linear building groups are detected for typification. The typification level of a building group is determined by its distance to the viewpoint as well as its viewing angle. Next, the selected buildings are removed and the remaining ones are adjusted in each group separately. To preserve the building features and their spatial distribution, the Attributed Relational Graph (ARG) and the Nested Earth Mover's Distance (NEMD) are used to evaluate the difference between the original building objects and the generalized ones. The experimental results indicate that our method can reduce the number of buildings while preserving the visual similarity of the urban areas. © 2011 Elsevier Ltd. All rights reserved.
  •  
14.
  • Mao, Bo, et al. (author)
  • Dynamic Online 3D Visualization Framework for Real-Time Energy Simulation Based on 3D Tiles
  • 2020
  • In: ISPRS International Journal of Geo-Information. - : MDPI. - 2220-9964. ; 9:3
  • Journal article (peer-reviewed), abstract:
    • Energy co-simulation can be used to analyze the dynamic energy consumption of a building or a region, which is essential for decision making in the planning and management of smart cities. To increase the accessibility of energy simulation results, a dynamic online 3D city model visualization framework based on 3D Tiles is proposed in this paper. Two types of styling methods are studied: attribute-based and ID-map-based. We first perform the energy co-simulation and save the results in CityGML format with EnergyADE. The 3D geometry data of the city objects are then combined with the simulation results as attributes, or just with object ID information, to generate Batched 3D Models (B3DM) in 3D Tiles. Next, styling strategies are pre-defined and can be selected by end users to show different scenarios. Finally, during the visualization process, dynamic interactions and data sources are integrated into the styling generation to support real-time visualization. This framework is implemented with Cesium. Compared with existing dynamic online 3D visualization approaches, such as direct styling or the Cesium Language (CZML, a JSON format for describing a time-dynamic graphical scene for display in a web browser running Cesium), the proposed framework is more flexible and has higher performance in both data transmission and rendering, which is essential for real-time GIS applications.
  •  
15.
  • Mao, Bo, 1983-, et al. (author)
  • Generalisation of textured 3D city models using image compression and multiple representation data structure
  • 2013
  • In: ISPRS journal of photogrammetry and remote sensing (Print). - : Elsevier BV. - 0924-2716 .- 1872-8235. ; 79, s. 68-79
  • Journal article (peer-reviewed), abstract:
    • Texture is an essential part of 3D building models and often takes up a large proportion of the data volume, which makes dynamic visualization difficult. To compress the texture of 3D building models for dynamic visualization at different scales, a multi-resolution texture generalization method is proposed, which contains two steps: texture image compression and texture coloring. In the first step, the texture images are compressed in both horizontal and vertical directions using the wavelet transform. In the second step, a TextureTree is created to store the building color texture for dynamic visualization from different distances. To generate the TextureTree, texture images are iteratively segmented by horizontal and vertical dividing zones, e.g. edges or background from edge detection, until each section is essentially a single color. The texture in each section is then represented by its main color, and the TextureTree is created based on the color differences between adjacent sections. In dynamic visualization, suitable compressed texture images or TextureTree nodes are selected to generate the 3D scenes based on the angle and distance between the user viewpoint and the building surface. The experimental results indicate that the wavelet-based image compression and the proposed TextureTree can effectively represent the visual features of textured buildings with much less data.
  •  
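The wavelet compression step above applies a transform along the horizontal and vertical directions of the texture image. Its simplest building block is one level of a 1D Haar transform; the exact wavelet family used in the paper is not stated here, so this is an illustrative sketch:

```python
def haar_1d(signal):
    """One level of a 1D Haar wavelet transform on an even-length
    signal: pairwise averages (the coarse texture) and pairwise
    half-differences (the details). Dropping the detail coefficients
    halves the data while keeping the dominant colours."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det
```

Applying the transform first along rows and then along columns gives the horizontal-plus-vertical compression described in the abstract.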
16.
  • Mao, Bo, 1983-, et al. (author)
  • Online Visualisation of a 3D City Model Using CityGML and X3DOM
  • 2011
  • In: Cartographica. - : University of Toronto Press Inc. (UTPress). - 0317-7173 .- 1911-9925. ; 46:2, s. 109-114
  • Journal article (peer-reviewed), abstract:
    • This article proposes a novel framework for online visualization of 3D city models. CityGML is used to represent the city models, based on which 3D scenes in X3D are generated, then dynamically updated to the user side with AJAX and visualized in WebGL-supported browsers with X3DOM. The experimental results show that the proposed framework can easily be implemented using widely supported major browsers and can efficiently support online visualization of 3D city models in small areas. For the visualization of large volumes of data, generalization methods and multiple-representation data structure should be studied in future research.
  •  
17.
  • Mao, Bo, 1983-, et al. (author)
  • Real time visualisation of 3D city models in street view based on visual salience
  • In: International Journal of Geographical Information Science. - 1365-8816 .- 1365-8824.
  • Journal article (peer-reviewed), abstract:
    • Street-level visualization is an important application of 3D city models. Challenges in street-level visualization are the cluttering of detailed buildings and performance. In this paper, a novel method for street-level visualization based on visual salience evaluation is proposed. The basic idea of the method is to preserve the salient buildings in a view and remove the non-salient ones. The method is composed of a pre-processing step and real-time visualization. The pre-processing starts by converting 3D building models at higher Levels of Detail (LoDs) into LoD1 with simplified ground plans. Then a number of index viewpoints are created along the streets; these indices refer both to the positions and to the directions of the sights. A visual salience value is computed for each visible simplified building in the respective index; the salience of a visible building is calculated based on the visual difference between the original and generalized models. We propose and evaluate three methods for visual salience: local difference, global difference and minimum projection area. The real-time visualization process starts by mapping the observer to its closest indices; the street view is then generated based on the building information stored in those indices. A user study shows that the local visual salience method gives better results than the global and area-based methods, and that the proposed method can reduce the number of loaded buildings by 90% while still preserving visual similarity with the original models.
  •  
18.
  • Mao, Bo, 1983- (author)
  • Visualisation and Generalisation of 3D City Models
  • 2011
  • Doctoral thesis (other academic/artistic), abstract:
    • 3D city models have been widely used in various applications such as urban planning, traffic control and disaster management. Efficient visualisation of 3D city models at different levels of detail (LoDs) is one of the pivotal technologies supporting these applications. In this thesis, a framework is proposed to visualise 3D city models online. Then, generalisation methods are studied and tailored to create 3D city scenes at different scales dynamically. Multiple representation structures are designed to preserve the generalisation results at different levels. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity with the original models. In the proposed online visualisation framework, City Geography Markup Language (CityGML) is used to represent city models; 3D scenes in Extensible 3D (X3D) are then generated from the CityGML data and dynamically updated to the user side for visualisation in browsers that support the Web-based Graphics Library (WebGL), using the X3D Document Object Model (X3DOM) technique. The proposed framework can be implemented in mainstream browsers without specific plugins, but it can only support online 3D city model visualisation in small areas. For visualisation of large data volumes, generalisation methods and multiple representation structures are required. To reduce the 3D data volume, various generalisation methods are investigated to increase visualisation efficiency. At the city block level, the aggregation and typification methods are improved to simplify the 3D city models. At the street level, buildings are selected according to their visual importance and the results are stored in indices for dynamic visualisation. At the building level, a new LoD, the shell model, is introduced: it is the exterior shell of the LoD3 model, in which objects such as windows, doors and smaller facilities are projected onto the walls.
On the facade level, especially for textured 3D buildings, image processing and analysis methods are employed to compress the texture. After the generalisation processes at the different levels, multiple representation data structures are required to store the generalised models for dynamic visualisation. At the city block level, the CityTree, a novel structure representing groups of buildings, is tested for building aggregation. According to the results, using the CityTree reduces the generalised 3D city model creation time by more than 50%. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, and these are typified with different strategies. At the building and street levels, a visible-building index is created along the roads to support building selection. At the facade level, the TextureTree, a structure representing building facade texture, is created based on texture segmentation. Different generalisation strategies lead to different outcomes, so it is critical to evaluate the quality of the generalised models. Visually salient features of the textured building models, such as size, colour and height, are employed to calculate the visual difference between the original and generalised models. Visual similarity is the criterion in street-view-level building selection. In this thesis, visual similarity is evaluated both locally and globally. At the local level, the projection area and the colour difference between the original and generalised models are considered. At the global level, the visual features of the 3D city models are represented by Attributed Relational Graphs (ARGs) and their similarity distances are calculated with the Nested Earth Mover's Distance (NEMD) algorithm.
The overall contribution of this thesis is that 3D city models are generalised at different scales (block, street, building and facade) and the results are stored in multiple representation structures for efficient dynamic visualisation, especially online visualisation.
  •  
19.
  • Mao, Bo, 1983- (author)
  • Visualisation and Generalisation of 3D City Models
  • 2010
  • Licentiate thesis (other academic/artistic), abstract:
    • 3D city models have been widely used in different applications such as urban planning, traffic control and disaster management. Effective visualisation of 3D city models at various scales is one of the pivotal techniques for implementing these applications. In this thesis, a framework is proposed to visualise 3D city models both online and offline, using City Geography Markup Language (CityGML) and Extensible 3D (X3D) to represent and present the models. Then, generalisation methods are studied and tailored to create multi-scale 3D city scenes dynamically. Finally, the quality of the generalised 3D city models is evaluated by measuring their visual similarity with the original models. In the proposed visualisation framework, 3D city models are stored in CityGML format, which supports both geometric and semantic information. These CityGML files are parsed to create 3D scenes that are visualised with existing 3D standards. Because the input and output of the framework are both standardised, it is possible to integrate city models from different sources and visualise them through different viewers. Considering the complexity of city objects, generalisation methods are studied to simplify the city models and increase visualisation efficiency. In this thesis, the aggregation and typification methods are improved to simplify the 3D city models. Multiple representation data structures are required to store the generalisation information for dynamic visualisation. One of these is the CityTree, a novel structure representing building groups, which is tested for building aggregation. Meanwhile, a Minimum Spanning Tree (MST) is employed to detect linear building group structures in the city models, and these are typified with different strategies. According to the experimental results, using the CityTree reduces the generalised 3D city model creation time by more than 50%. Different generalisation strategies lead to different outcomes,
so it is important to evaluate the quality of the generalised models. In this thesis a new evaluation method is proposed: visual features of the 3D city models are represented by Attributed Relational Graphs (ARGs) and their similarity distances are calculated with the Nested Earth Mover's Distance (NEMD) algorithm. The calculation results and a user survey show that the ARG and NEMD methods can reflect the visual similarity between generalised city models and the original ones.
  •  
20.
  • Zhang, Tao, et al. (author)
  • Ship detection using the surface scattering similarity and scattering power
  • 2019
  • In: 2019 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2019). - : IEEE. - 9781538691540 ; , s. 1264-1267
  • Conference paper (peer-reviewed), abstract:
    • The sea surface and ships have different backscattering mechanisms: surface scattering is predominant for the sea surface in low sea states. Based on this fact, many ship detectors have been developed that suppress the surface scattering resulting from the sea surface. In practice, however, small ships may sometimes also exhibit strong surface scattering, so methods that avoid using surface scattering features may easily miss small ships. To verify this point, in this paper we first analyze the shortcomings of An's method, which is based on surface scattering similarity and the power maximization synthesis detector (PMS), and then improve it to detect small ships more effectively. To demonstrate the performance of the proposed method, an AIRSAR L-band polarimetric SAR dataset is used. Compared to other methods, the new method shows better ship detection performance.
  •  
21.
  • 2019
  • Journal article (peer-reviewed)
  •  
Type of publication
journal article (10)
conference paper (9)
doctoral thesis (1)
licentiate thesis (1)
Type of content
peer-reviewed (17)
other academic/artistic (4)
Author/Editor
Harrie, Lars (10)
Stenlund, Evert (2)
Blanco, F. (2)
Christiansen, Peter (2)
Dobrin, Alexandru (2)
Majumdar, A. K. Dutt ... (2)
Gros, Philippe (2)
Kurepin, A. (2)
Kurepin, A. B. (2)
Malinina, Ludmila (2)
Milosevic, Jovan (2)
Ortiz Velasquez, Ant ... (2)
Richert, Tuva (2)
Sogaard, Carsten (2)
Peskov, Vladimir (2)
Abelev, Betty (2)
Adam, Jaroslav (2)
Adamova, Dagmar (2)
Adare, Andrew Marsha ... (2)
Aggarwal, Madan (2)
Rinella, Gianluca Ag ... (2)
Agostinelli, Andrea (2)
Ahammed, Zubayer (2)
Ahmad, Nazeer (2)
Ahmad, Arshad (2)
Ahn, Sul-Ah (2)
Ahn, Sang Un (2)
Akindinov, Alexander (2)
Aleksandrov, Dmitry (2)
Alessandro, Bruno (2)
Alici, Andrea (2)
Alkin, Anton (2)
Almaraz Avina, Erick ... (2)
Alme, Johan (2)
Alt, Torsten (2)
Altini, Valerio (2)
Altinpinar, Sedat (2)
Altsybeev, Igor (2)
Andrei, Cristian (2)
Andronic, Anton (2)
Anguelov, Venelin (2)
Anielski, Jonas (2)
Anson, Christopher D ... (2)
Anticic, Tome (2)
Antinori, Federico (2)
Antonioli, Pietro (2)
Aphecetche, Laurent ... (2)
Appelshauser, Harald (2)
Arbor, Nicolas (2)
Arcelli, Silvia (2)
University
Royal Institute of Technology (15)
Lund University (9)
University of Gothenburg (1)
Uppsala University (1)
Halmstad University (1)
Stockholm University (1)
Chalmers University of Technology (1)
Karolinska Institutet (1)
Language
English (21)
Research subject (UKÄ/SCB)
Natural sciences (19)
Engineering and Technology (3)
Medical and Health Sciences (1)
Social Sciences (1)
