SwePub
Search the SwePub database

  Extended search

Result list for search "L773:1077 2626 OR L773:1941 0506 OR L773:2160 9306"

Search: L773:1077 2626 OR L773:1941 0506 OR L773:2160 9306

  • Result 1-50 of 117
1.
  • Dai, Shaozhang, et al. (author)
  • RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics
  • 2023
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506 .- 2160-9306. ; 29:1, s. 451-461
  • Journal article (peer-reviewed)abstract
    • Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either been limited in their range and resolution, been spatially fixed, or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has similar accuracy to the physical slider but is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use-cases: selection in a time-series chart; interactive slicing of CT scans; and finally exploration of a scatter plot depicting time-varying socio-economic data.
  •  
2.
  • Falk, Martin, Dr.rer.nat. 1981-, et al. (author)
  • Interactive Visualization of 3D Histopathology in Native Resolution
  • 2019
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506 .- 2160-9306. ; 25:1, s. 1008-1017
  • Journal article (peer-reviewed)abstract
    • We present a visualization application that enables effective interactive visual analysis of large-scale 3D histopathology, that is, high-resolution 3D microscopy data of human tissue. Clinical work flows and research based on pathology have, until now, largely been dominated by 2D imaging. As we will show in the paper, studying volumetric histology data will open up novel and useful opportunities for both research and clinical practice. Our starting point is the current lack of appropriate visualization tools in histopathology, which has been a limiting factor in the uptake of digital pathology. Visualization of 3D histology data does pose difficult challenges in several aspects. The full-color datasets are dense and large in scale, on the order of 100,000 x 100,000 x 100 voxels. This entails serious demands on both rendering performance and user experience design. Despite this, our developed application supports interactive study of 3D histology datasets at native resolution. Our application is based on tailoring and tuning of existing methods, system integration work, as well as a careful study of domain specific demands emanating from a close participatory design process with domain experts as team members. Results from a user evaluation employing the tool demonstrate a strong agreement among the 14 participating pathologists that 3D histopathology will be a valuable and enabling tool for their work.
  •  
3.
  • Nonato, Luis Gustavo, et al. (author)
  • Multidimensional Projection for Visual Analytics : Linking Techniques with Distortions, Tasks, and Layout Enrichment
  • 2019
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506 .- 2160-9306. ; 25:8, s. 2650-2673
  • Journal article (peer-reviewed)abstract
    • The ultimate goal of multiobjective optimization is to help a decision maker (DM) identify solution(s) of interest (SOI) achieving satisfactory tradeoffs among multiple conflicting criteria. This can be realized by leveraging DM's preference information in evolutionary multiobjective optimization (EMO). No consensus has been reached on the effectiveness brought by incorporating preference in EMO (either a priori or interactively) versus a posteriori decision making after a complete run of an EMO algorithm. Bearing this consideration in mind, this article: 1) provides a pragmatic overview of the existing developments of preference-based EMO (PBEMO) and 2) conducts a series of experiments to investigate the effectiveness brought by preference incorporation in EMO for approximating various SOI. In particular, the DM's preference information is elicited as a reference point, which represents her/his aspirations for different objectives. The experimental results demonstrate that preference incorporation in EMO does not always lead to a desirable approximation of SOI if the DM's preference information is not well utilized or if the DM elicits invalid preference information, which is not uncommon when encountering a black-box system. To a certain extent, this issue can be remedied through an interactive preference elicitation. Last but not least, we find that a PBEMO algorithm can be generalized to approximate the whole PF given an appropriate setup of preference information.
  •  
4.
  • Sharma, Mohit, et al. (author)
  • Continuous Scatterplot Operators for Bivariate Analysis and Study of Electronic Transitions
  • 2023
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506 .- 2160-9306.
  • Journal article (peer-reviewed)abstract
    • Electronic transitions in molecules due to the absorption or emission of light are complex quantum mechanical processes. Their study plays an important role in the design of novel materials. A common yet challenging task in the study is to determine the nature of electronic transitions, namely which subgroups of the molecule are involved in the transition by donating or accepting electrons, followed by an investigation of the variation in the donor-acceptor behavior for different transitions or conformations of the molecules. In this paper, we present a novel approach for the analysis of a bivariate field and show its applicability to the study of electronic transitions. This approach is based on two novel operators, the continuous scatterplot (CSP) lens operator and the CSP peel operator, that enable effective visual analysis of bivariate fields. Both operators can be applied independently or together to facilitate analysis. The operators motivate the design of control polygon inputs to extract fiber surfaces of interest in the spatial domain. The CSPs are annotated with a quantitative measure to further support the visual analysis. We study different molecular systems and demonstrate how the CSP peel and CSP lens operators help identify and study donor and acceptor characteristics in molecular systems.
  •  
5.
  • Yan, Lin, et al. (author)
  • Geometry Aware Merge Tree Comparisons for Time-Varying Data with Interleaving Distances
  • 2023
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506 .- 2160-9306. ; 29:8, s. 3489-3506
  • Journal article (peer-reviewed)abstract
    • Merge trees, a type of topological descriptor, serve to identify and summarize the topological characteristics associated with scalar fields. They present a great potential for the analysis and visualization of time-varying data. First, they give compressed and topology-preserving representations of data instances. Second, their comparisons provide a basis for studying the relations among data instances, such as their distributions, clusters, outliers, and periodicities. A number of comparative measures have been developed for merge trees. However, these measures are often computationally expensive since they implicitly consider all possible correspondences between critical points of the merge trees. In this paper, we perform geometry-aware comparisons of merge trees. The main idea is to decouple the computation of a comparative measure into two steps: a labeling step that generates a correspondence between the critical points of two merge trees, and a comparison step that computes distances between a pair of labeled merge trees by encoding them as matrices. We show that our approach is general, computationally efficient, and practically useful. Our general framework makes it possible to integrate geometric information of the data domain in the labeling process. At the same time, it reduces the computational complexity since not all possible correspondences have to be considered. We demonstrate via experiments that such geometry-aware merge tree comparisons help to detect transitions, clusters, and periodicities of a time-varying dataset, as well as to diagnose and highlight the topological changes between adjacent data instances.
  •  
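The decoupling described in entry 5 (label critical points first, then compare matrix encodings of the labeled trees) can be sketched compactly. The encoding below, which records the scalar value at the merge of each labeled leaf pair, is an illustrative stand-in for the paper's induced matrices, and the L-infinity norm is just one possible comparison choice:

```python
import numpy as np

def merge_matrix(parent, value, leaves):
    """Encode a labeled merge tree as a matrix: entry (i, j) holds the scalar
    value at the merge (lowest common ancestor) of labeled leaves i and j.
    parent: dict node -> parent (root maps to None); value: dict node -> scalar."""
    def ancestors(n):
        path = []
        while n is not None:
            path.append(n)
            n = parent[n]
        return path
    k = len(leaves)
    M = np.zeros((k, k))
    for i, a in enumerate(leaves):
        pa = ancestors(a)
        for j, b in enumerate(leaves):
            pb = set(ancestors(b))
            M[i, j] = value[next(n for n in pa if n in pb)]  # first common ancestor
    return M

def labeled_tree_distance(t1, t2, leaves):
    """L-infinity distance between matrix encodings; t1, t2 = (parent, value)."""
    return np.abs(merge_matrix(*t1, leaves) - merge_matrix(*t2, leaves)).max()
```

Because the label correspondence is fixed up front, the comparison is a cheap matrix operation rather than a search over all critical-point matchings, which is the complexity reduction the abstract describes.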
6.
  • Angelopoulos, Anastasios N., et al. (author)
  • Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 27:5, s. 2577-2586
  • Journal article (peer-reviewed)abstract
    • The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, constraining data acquisition speed to 300 Hz realistically. This obstructs the use of mobile eye trackers to perform, e.g., low latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated in the same conditions. Our system, previewed in Figure 1, builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45 degrees -1.75 degrees for fields of view from 45 degrees to 98 degrees. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.
  •  
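The polynomial regressor mentioned in entry 6 maps parametric pupil-model coordinates to a point of gaze. A minimal least-squares sketch follows; the quadratic feature set and the use of only the pupil center are assumptions for illustration, not the paper's exact parametrization:

```python
import numpy as np

def features(p):
    """Quadratic polynomial features of 2D pupil coordinates (assumed basis)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_gaze_regressor(pupil_xy, gaze_xy):
    """pupil_xy: (n, 2) pupil centers; gaze_xy: (n, 2) calibration targets."""
    W, *_ = np.linalg.lstsq(features(pupil_xy), gaze_xy, rcond=None)
    return W                        # (6, 2) coefficient matrix

def predict_gaze(W, pupil_xy):
    """Evaluate the fitted polynomial at new pupil coordinates."""
    return features(pupil_xy) @ W
```

Fitting happens once per calibration; prediction is a single matrix product, which is what makes per-event gaze updates at such rates plausible.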
7.
  • Bae, S. Sandra, et al. (author)
  • A Computational Design Pipeline to Fabricate Sensing Network Physicalizations
  • 2024
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE COMPUTER SOC. - 1077-2626 .- 1941-0506. ; 30:1, s. 913-923
  • Journal article (peer-reviewed)abstract
    • Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. Using our pipeline, designers can readily produce network physicalizations supporting selection, the most critical atomic operation for interaction, by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently considering the form and interactivity of a physicalization within one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption.
  •  
8.
  • Barrera, T, et al. (author)
  • Faster shading by equal angle interpolation of vectors
  • 2004
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 10:2, s. 217-223
  • Journal article (peer-reviewed)abstract
    • In this paper, we show how spherical linear interpolation can be used to produce shading with a quality at least similar to Phong shading at a computational effort in the inner loop that is close to that of the Gouraud method. We show how to use Chebyshev's recurrence relation in order to compute the shading very efficiently. Furthermore, it can also be used to interpolate vectors in such a way that normalization is not necessary, which will make the interpolation very fast. The somewhat larger setup effort required by this approach can be handled through table look-up techniques.
  •  
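The key trick in entry 8, evaluating spherical linear interpolation with Chebyshev's recurrence so the inner loop needs no trigonometry and no renormalization, can be sketched as follows. This is a toy version assuming distinct, non-antipodal unit vectors; the table look-up setup from the paper is omitted:

```python
import numpy as np

def slerp(n0, n1, theta, t):
    """Standard spherical linear interpolation (assumes 0 < theta < pi)."""
    return (np.sin((1.0 - t) * theta) * n0 + np.sin(t * theta) * n1) / np.sin(theta)

def slerp_sequence(n0, n1, steps):
    """Yield steps+1 unit vectors from n0 to n1 along their great circle using
    Chebyshev's recurrence v[k+1] = 2*cos(theta/steps)*v[k] - v[k-1]."""
    n0, n1 = np.asarray(n0, float), np.asarray(n1, float)
    theta = np.arccos(np.clip(np.dot(n0, n1), -1.0, 1.0))
    c = 2.0 * np.cos(theta / steps)             # the only trig in the setup
    prev = slerp(n0, n1, theta, -1.0 / steps)   # phantom sample "before" n0
    cur = n0
    for _ in range(steps + 1):
        yield cur
        prev, cur = cur, c * cur - prev         # equal-angle step, stays unit length
    # each component is a sampled sinusoid in t, so the three-term recurrence is exact
```

Every intermediate vector lies exactly on the great circle, which is why no normalization is needed in the loop, matching the claim in the abstract.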
9.
  • Bergström, Ilias, et al. (author)
  • The Plausibility of a String Quartet Performance in Virtual Reality
  • 2017
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE COMPUTER SOC. - 1077-2626 .- 1941-0506. ; 23:4, s. 1332-1339
  • Journal article (peer-reviewed)abstract
    • We describe an experiment that explores the contribution of auditory and other features to the illusion of plausibility in a virtual environment that depicts the performance of a string quartet. 'Plausibility' refers to the component of presence that is the illusion that the perceived events in the virtual environment are really happening. The features studied were: Gaze (the musicians ignored the participant, the musicians sometimes looked towards and followed the participant's movements), Sound Spatialization (Mono, Stereo, Spatial), Auralization (no sound reflections, reflections corresponding to a room larger than the one perceived, reflections that exactly matched the virtual room), and Environment (no sound from outside of the room, birdsong and wind corresponding to the outside scene). We adopted the methodology based on color matching theory, where 20 participants were first able to assess their feeling of plausibility in the environment with each of the four features at their highest setting. Then five times participants started from a low setting on all features and were able to make transitions from one system configuration to another until they matched their original feeling of plausibility. From these transitions a Markov transition matrix was constructed, and also probabilities of a match conditional on feature configuration. The results show that Environment and Gaze were individually the most important factors influencing the level of plausibility. The highest probability transitions were to improve Environment and Gaze, and then Auralization and Spatialization. We present this work as both a contribution to the methodology of assessing presence without questionnaires, and showing how various aspects of a musical performance can influence plausibility.
  •  
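The Markov transition matrix described in entry 9 is built by counting observed configuration-to-configuration transitions and row-normalizing. A minimal sketch, with a made-up state encoding:

```python
import numpy as np

def transition_matrix(sequences, n_states):
    """Estimate a Markov transition matrix from observed state sequences.
    sequences: iterable of state-index lists, one per participant trial."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    # row-normalize; rows with no observed transitions stay all-zero
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Example: three short trials over 4 hypothetical feature configurations
P = transition_matrix([[0, 1, 3], [0, 2, 3, 3], [1, 3]], n_states=4)
```

Row i of P then gives the empirical probability of moving from configuration i to each other configuration, which is what the authors inspect to find the most plausibility-relevant features.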
10.
  • Bladin, Kalle, et al. (author)
  • Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 24:1, s. 802-811
  • Journal article (peer-reviewed)abstract
    • Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case is showing dynamic processes, such as concurrent weather conditions on Earth that require temporal datasets. As a final example we use data from the New Horizons spacecraft which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
  •  
11.
  • Bock, Alexander, et al. (author)
  • Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
  • 2012
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 18:12, s. 2315-2324
  • Journal article (peer-reviewed)abstract
    • Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
  •  
12.
  • Bock, Alexander, 1985-, et al. (author)
  • OpenSpace : A System for Astrographics
  • 2020
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 26:1, s. 633-642
  • Journal article (peer-reviewed)abstract
    • Human knowledge about the cosmos is rapidly increasing as instruments and simulations are generating new data supporting the formation of theory and understanding of the vastness and complexity of the universe. OpenSpace is a software system that takes on the mission of providing an integrated view of all these sources of data and supports interactive exploration of the known universe from the millimeter scale showing instruments on spacecraft to billions of light years when visualizing the early universe. The ambition is to support research in astronomy and space exploration, science communication at museums and in planetariums as well as bringing exploratory astrographics to the classroom. There is a multitude of challenges that need to be met in reaching this goal such as the data variety, multiple spatio-temporal scales, collaboration capabilities, etc. Furthermore, the system has to be flexible and modular to enable rapid prototyping and inclusion of new research results or space mission data and thereby shorten the time from discovery to dissemination. To support the different use cases the system has to be hardware agnostic and support a range of platforms and interaction paradigms. In this paper we describe how OpenSpace meets these challenges in an open source effort that is paving the path for the next generation of interactive astrographics.
  •  
13.
  • Bock, Alexander, 1985-, et al. (author)
  • TopoAngler: Interactive Topology-Based Extraction of Fishes
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 24:1, s. 812-821
  • Journal article (peer-reviewed)abstract
    • We present TopoAngler, a visualization framework that enables an interactive user-guided segmentation of fishes contained in a micro-CT scan. The inherent noise in the CT scan coupled with the often disconnected (and sometimes broken) skeletal structure of fishes makes an automatic segmentation of the volume impractical. To overcome this, our framework combines techniques from computational topology with an interactive visual interface, enabling the human-in-the-loop to effectively extract fishes from the volume. In the first step, the join tree of the input is used to create a hierarchical segmentation of the volume. Through the use of linked views, the visual interface then allows users to interactively explore this hierarchy, and gather parts of individual fishes into a coherent sub-volume, thus reconstructing entire fishes. Our framework was primarily developed for its application to CT scans of fishes, generated as part of the ScanAllFish project, through close collaboration with their lead scientist. However, we expect it to also be applicable in other biological applications where a single dataset contains multiple specimens, a common routine that is now widely followed in laboratories to increase throughput of expensive CT scanners.
  •  
14.
  • Bodin, Kenneth, et al. (author)
  • Constraint Fluids
  • 2012
  • In: IEEE Transactions on Visualization and Computer Graphics. - Los Alamitos, USA : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 18:3, s. 516-526
  • Journal article (peer-reviewed)abstract
    • We present a fluid simulation method where incompressibility is enforced through a holonomic constraint on the mass density. The method starts in a Lagrangian particle formulation where the mass density and other field quantities are represented by Smoothed Particle Hydrodynamics (SPH) kernel approximations. The density constraint is formulated as a regularized many-body constraint and is equivalent to very high sound speed. The system is integrated using a variational discrete-time scheme, SPOOK, that includes constraint regularization and stabilization. This constraint formulation of SPH enables systematic multiphysics integration, between rigid multibody physics and fluids, where buoyancy falls out naturally. The fluid model results in a linear system of equations, while more general multiphysics systems result in a mixed linear complementarity problem (MLCP), and we solve these using iterative methods. The results demonstrate near perfect incompressibility, vastly improved stability allowing for large time steps, and two orders of magnitude improved computational performance. Proof of concept is given for computer graphics applications and interactive simulations.
  •  
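For context, the holonomic density constraint in entry 14 is g_i = rho_i / rho_0 - 1 = 0, where rho_i is the SPH kernel estimate of mass density at particle i. A toy evaluation using the standard poly6 kernel is sketched below; the SPOOK solver that actually enforces the constraint is not shown:

```python
import numpy as np

def poly6(r, h):
    """Standard SPH poly6 kernel, zero beyond the support radius h."""
    w = np.zeros_like(r)
    m = r < h
    w[m] = 315.0 / (64.0 * np.pi * h**9) * (h**2 - r[m]**2) ** 3
    return w

def density_constraints(x, mass, h, rho0):
    """x: (n, 3) particle positions -> per-particle constraint values g_i.
    g_i = 0 exactly when the fluid is locally incompressible."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # (n, n) distances
    rho = (mass * poly6(d, h)).sum(axis=1)                       # kernel density sum
    return rho / rho0 - 1.0
```

The O(n^2) all-pairs distance here is only for clarity; a real implementation would use a spatial hash or neighbor grid.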
15.
  • Brodersen, Anders, et al. (author)
  • Geometric Texturing Using Level Sets
  • 2008
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 14:2, s. 277-288
  • Journal article (peer-reviewed)abstract
    • We present techniques for warping and blending (or subtracting) geometric textures onto surfaces represented by high resolution level sets. The geometric texture itself can be represented either explicitly as a polygonal mesh or implicitly as a level set. Unlike previous approaches, we can produce topologically connected surfaces with smooth blending and low distortion. Specifically, we offer two different solutions to the problem of adding fine-scale geometric detail to surfaces. Both solutions assume a level set representation of the base surface which is easily achieved by means of a mesh-to-level-set scan conversion. To facilitate our mapping, we parameterize the embedding space of the base level set surface using fast particle advection. We can then warp explicit texture meshes onto this surface at nearly interactive speeds or blend level set representations of the texture to produce high-quality surfaces with smooth transitions.
  •  
16.
  • Bruckner, Stefan, et al. (author)
  • A Model of Spatial Directness in Interactive Visualization
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506.
  • Journal article (peer-reviewed)abstract
    • We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.
  •  
17.
  • Bujack, Roxana, et al. (author)
  • Moment Invariants for 2D Flow Fields Using Normalization in Detail
  • 2015
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 21:8, s. 916-929
  • Journal article (peer-reviewed)abstract
    • The analysis of 2D flow data is often guided by the search for characteristic structures with semantic meaning. One way to approach this question is to identify structures of interest by a human observer, with the goal of finding similar structures in the same or other datasets. The major challenges related to this task are to specify the notion of similarity and define respective pattern descriptors. While the descriptors should be invariant to certain transformations, such as rotation and scaling, they should provide a similarity measure with respect to other transformations, such as deformations. In this paper, we propose to use moment invariants as pattern descriptors for flow fields. Moment invariants are one of the most popular techniques for the description of objects in the field of image recognition. They have recently also been applied to identify 2D vector patterns limited to the directional properties of flow fields. Moreover, we discuss which transformations should be considered for the application to flow analysis. In contrast to previous work, we follow the intuitive approach of moment normalization, which results in a complete and independent set of translation, rotation, and scaling invariant flow field descriptors. They also allow us to distinguish flow features with different velocity profiles. We apply the moment invariants in a pattern recognition algorithm to a real world dataset and show that the theoretical results can be extended to discrete functions in a robust way.
  •  
18.
  • Capannini, Gabriele, et al. (author)
  • Adaptive Collision Culling for Massive Simulations by a Parallel and Context-Aware Sweep and Prune Algorithm
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 24:7, s. 2064-2077
  • Journal article (peer-reviewed)abstract
    • We present an improved parallel Sweep and Prune algorithm that solves the dynamic box intersection problem in three dimensions. It scales up to very large datasets, which makes it suitable for broad phase collision detection in complex moving body simulations. Our algorithm gracefully handles high-density scenarios, including challenging clustering behavior, by using a double-axis sweeping approach and a cache-friendly succinct data structure. The algorithm is realized by three parallel stages for sorting, candidate generation, and object pairing. By the use of temporal coherence, our sorting stage runs with close to optimal load balancing. Furthermore, our approach is characterized by a work-division strategy that relies on adaptive partitioning, which leads to almost ideal scalability. In addition, for scenarios that involve intense clustering along several axes simultaneously, we propose an enhancement that increases the context-awareness of the algorithm. By exploiting information gathered along three orthogonal axes, an efficient choice of what range query to perform can be made per object during run-time. Experimental results show high performance for up to millions of objects on modern multi-core CPUs.
  •  
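For reference, the classic serial single-axis sweep and prune that entry 18 parallelizes and extends looks roughly like this; the double-axis sweeping, succinct data structure, and context-aware range-query choice are not reproduced here:

```python
def sweep_and_prune(boxes):
    """Broad-phase collision culling over axis-aligned boxes.
    boxes: list of (min_xyz, max_xyz) tuples -> set of overlapping index pairs."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0][0])  # sort by min x
    active, pairs = [], set()
    for i in order:
        lo_i = boxes[i][0][0]
        # prune boxes whose x-interval ended before box i begins
        active = [j for j in active if boxes[j][1][0] >= lo_i]
        for j in active:
            # candidate pair: confirm overlap on all three axes
            if all(boxes[i][0][k] <= boxes[j][1][k] and
                   boxes[j][0][k] <= boxes[i][1][k] for k in range(3)):
                pairs.add((min(i, j), max(i, j)))
        active.append(i)
    return pairs
```

The weakness the paper targets is visible in the sketch: when many boxes cluster along the sweep axis, the active list grows and the inner loop degrades, which is what the double-axis and context-aware variants mitigate.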
19.
  • Chatzimparmpas, Angelos, 1994-, et al. (author)
  • FeatureEnVi : Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches
  • 2022
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 28:4, s. 1773-1791
  • Journal article (peer-reviewed)abstract
    • The machine learning (ML) life cycle involves a series of iterative steps, from the effective gathering and preparation of the data—including complex feature engineering processes—to the presentation and improvement of results, with various algorithms to choose from in every step. Feature engineering in particular can be very beneficial for ML, leading to numerous improvements such as boosting the predictive results, decreasing computational times, reducing excessive noise, and increasing the transparency behind the decisions taken during the training. Despite that, while several visual analytics tools exist to monitor and control the different stages of the ML life cycle (especially those related to data and algorithms), feature engineering support remains inadequate. In this paper, we present FeatureEnVi, a visual analytics system specifically designed to assist with the feature engineering process. Our proposed system helps users to choose the most important feature, to transform the original features into powerful alternatives, and to experiment with different feature generation combinations. Additionally, data space slicing allows users to explore the impact of features on both local and global scales. FeatureEnVi utilizes multiple automatic feature selection techniques; furthermore, it visually guides users with statistical evidence about the influence of each feature (or subsets of features). The final outcome is the extraction of heavily engineered features, evaluated by multiple validation metrics. The usefulness and applicability of FeatureEnVi are demonstrated with two use cases and a case study. We also report feedback from interviews with two ML experts and a visualization researcher who assessed the effectiveness of our system.
  •  
20.
  • Chatzimparmpas, Angelos, 1994-, et al. (author)
  • StackGenVis : Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society Digital Library. - 1077-2626 .- 1941-0506. ; 27:2, s. 1547-1557
  • Journal article (peer-reviewed)abstract
    • In machine learning (ML), ensemble methods—such as bagging, boosting, and stacking—are widely-established approaches that regularly achieve top-notch predictive performance. Stacking (also called "stacked generalization") is an ensemble method that combines heterogeneous base models, arranged in at least one layer, and then employs another metamodel to summarize the predictions of those models. Although it may be a highly-effective approach for increasing the predictive performance of ML, generating a stack of models from scratch can be a cumbersome trial-and-error process. This challenge stems from the enormous space of available solutions, with different sets of data instances and features that could be used for training, several algorithms to choose from, and instantiations of these algorithms using diverse parameters (i.e., models) that perform differently according to various metrics. In this work, we present a knowledge generation model, which supports ensemble learning with the use of visualization, and a visual analytics system for stacked generalization. Our system, StackGenVis, assists users in dynamically adapting performance metrics, managing data instances, selecting the most important features for a given data set, choosing a set of top-performant and diverse algorithms, and measuring the predictive performance. In consequence, our proposed tool helps users to decide between distinct models and to reduce the complexity of the resulting stack by removing overpromising and underperforming models. The applicability and effectiveness of StackGenVis are demonstrated with two use cases: a real-world healthcare data set and a collection of data related to sentiment/stance detection in texts. Finally, the tool has been evaluated through interviews with three ML experts.
  •  
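Stacked generalization as described in entry 20 (heterogeneous base models arranged in a layer, plus a metamodel over their predictions) corresponds to a standard scikit-learn construction. This is a generic example of the ensemble structure, not StackGenVis itself:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# heterogeneous base models in one layer, summarized by a metamodel
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
print(cross_val_score(stack, X, y, cv=5).mean())
```

The trial-and-error space the abstract mentions is exactly the choice of `estimators`, their hyperparameters, and the training data slices, which the visual analytics system helps manage.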
21.
  • Chatzimparmpas, Angelos, 1994-, et al. (author)
  • t-viSNE : Interactive Assessment and Interpretation of t-SNE Projections
  • 2020
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 26:8, s. 2696-2714
  • Journal article (peer-reviewed)abstract
    • t-Distributed Stochastic Neighbor Embedding (t-SNE) for the visualization of multidimensional data has proven to be a popular approach, with successful applications in a wide range of domains. Despite their usefulness, t-SNE projections can be hard to interpret or even misleading, which hurts the trustworthiness of the results. Understanding the details of t-SNE itself and the reasons behind specific patterns in its output may be a daunting task, especially for non-experts in dimensionality reduction. In this work, we present t-viSNE, an interactive tool for the visual exploration of t-SNE projections that enables analysts to inspect different aspects of their accuracy and meaning, such as the effects of hyper-parameters, distance and neighborhood preservation, densities and costs of specific neighborhoods, and the correlations between dimensions and visual patterns. We propose a coherent, accessible, and well-integrated collection of different views for the visualization of t-SNE projections. The applicability and usability of t-viSNE are demonstrated through hypothetical usage scenarios with real data sets. Finally, we present the results of a user study where the tool’s effectiveness was evaluated. By bringing to light information that would normally be lost after running t-SNE, we hope to support analysts in using t-SNE and making its results better understandable.
  •  
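One of the accuracy aspects t-viSNE inspects, neighborhood preservation, can be measured with a simple diagnostic: the average overlap between each point's k nearest neighbors in data space and in projection space. A sketch with scikit-learn follows; the exact metric used in the tool may differ:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

X, _ = load_digits(return_X_y=True)
Y = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

k = 10
hi = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X, return_distance=False)[:, 1:]
lo = NearestNeighbors(n_neighbors=k + 1).fit(Y).kneighbors(Y, return_distance=False)[:, 1:]
# per-point fraction of high-dimensional neighbors kept in the 2D embedding
overlap = np.mean([len(set(a) & set(b)) / k for a, b in zip(hi, lo)])
print(f"mean {k}-neighborhood preservation: {overlap:.3f}")
```

Mapping this per-point score onto the projection as color is one way such a tool can expose regions where the embedding is misleading.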
23.
  • Costa, Jonathas, et al. (author)
  • Interactive Visualization of Atmospheric Effects for Celestial Bodies
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE COMPUTER SOC. - 1077-2626 .- 1941-0506. ; 27:2, s. 785-795
  • Journal article (peer-reviewed)abstract
    • We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and to provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls makes it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found, for example, on exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data, feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets.
  •  
24.
  • Dewe, Hayley, et al. (author)
  • My Virtual Self : The Role of Movement in Children's Sense of Embodiment
  • 2022
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 28:12, s. 4061-4072
  • Journal article (peer-reviewed)abstract
    • There are vast potential applications for children's entertainment and education with modern virtual reality (VR) experiences, yet we know very little about how the movement or form of such a virtual body can influence children's feelings of control (agency) or the sensation that they own the virtual body (ownership). In two experiments, we gave a total of 197 children aged 4-14 years a virtual hand which moved synchronously or asynchronously with their own movements and had them interact with a VR environment. We found that movement synchrony influenced feelings of control and ownership at all ages. In Experiment 1 only, participants additionally felt haptic feedback either congruently, delayed or not at all - this did not influence feelings of control or ownership. In Experiment 2 only, participants used either a virtual hand or non-human virtual block. Participants embodied both forms to some degree, provided visuomotor signals were synchronous (as indicated by ownership, agency, and location ratings). Yet, only the hand in the synchronous movement condition was described as feeling like part of the body, rather than like a tool (e.g., a mouse or controller). Collectively, these findings highlight the overall dominance of visuomotor synchrony for children's own-body representation; that children can embody non-human forms to some degree; and that embodiment is also somewhat constrained by prior expectations of body form.
  •  
25.
  • Dolonius, Dan, 1985, et al. (author)
  • Compressing color data for voxelized surface geometry
  • 2019
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1941-0506 .- 1077-2626. ; 25:2, s. 1270-1282
  • Journal article (peer-reviewed)abstract
    • We explore the problem of decoupling color information from geometry in large scenes of voxelized surfaces and of compressing the array of colors without introducing disturbing artifacts. In this extension of our I3D paper with the same title [1], we first present a novel method for connecting each node in a sparse voxel DAG to its corresponding colors in a separate 1D array of colors, with very little additional information stored to the DAG. Then, we show that by mapping the 1D array of colors onto a 2D image using a space-filling curve, we can achieve high compression rates and good quality using conventional, modern, hardware-accelerated texture compression formats such as ASTC or BC7. We additionally explore whether this method can be used to compress voxel colors for off-line storage and network transmission using conventional off-line compression formats such as JPG and JPG2K. For real-time decompression, we suggest a novel variable bitrate block encoding that consistently outperforms previous work, often achieving two times the compression at equal quality.
  •  
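The 1D-to-2D color mapping in entry 25 relies on a space-filling curve so that consecutive colors land in compact 2D blocks, which block-based codecs like ASTC or BC7 compress well. A Morton (Z-order) curve is sketched below as one such choice; the specific curve is illustrative:

```python
import numpy as np

def morton_xy(i):
    """De-interleave the bits of linear index i into 2D coordinates (x, y)."""
    x = y = 0
    for b in range(32):
        x |= ((i >> (2 * b)) & 1) << b       # even bits -> x
        y |= ((i >> (2 * b + 1)) & 1) << b   # odd bits  -> y
    return x, y

def colors_to_image(colors, size):
    """Lay a 1D color array into a size x size image along the Z-order curve.
    colors: (n, 3) uint8 array with n <= size*size; size: a power of two."""
    img = np.zeros((size, size, 3), dtype=np.uint8)
    for i, c in enumerate(colors):
        x, y = morton_xy(i)
        img[y, x] = c
    return img
```

Any index i below size*size maps inside the image, and indices that are close in 1D stay close in 2D, which is the property the compression step exploits.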
26.
  • Domova, Veronika, et al. (author)
  • A Model for Types and Levels of Automation in Visual Analytics: A Survey, a Taxonomy, and Examples
  • 2023
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE COMPUTER SOC. - 1077-2626 .- 1941-0506. ; 29:8, s. 3550-3568
  • Journal article (peer-reviewed)abstract
    • The continuous growth in availability and access to data presents a major challenge to the human analyst. As the manual analysis of large and complex datasets is nowadays practically impossible, the need for assisting tools that can automate the analysis process while keeping the human analyst in the loop is imperative. A large and growing body of literature recognizes the crucial role of automation in Visual Analytics and suggests that automation is among the most important constituents for effective Visual Analytics systems. Today, however, there is no appropriate taxonomy or terminology for assessing the extent of automation in a Visual Analytics system. In this article, we aim to address this gap by introducing a model of levels of automation tailored for the Visual Analytics domain. The consistent terminology of the proposed taxonomy could provide a ground for users/readers/reviewers to describe and compare automation in Visual Analytics systems. Our taxonomy is grounded on a combination of several existing and well-established taxonomies of levels of automation in the human-machine interaction domain and relevant models within the visual analytics field. To exemplify the proposed taxonomy, we selected a set of existing systems from the event-sequence analytics domain and mapped the automation of their visual analytics process stages against the automation levels in our taxonomy.
  •  
27.
  • Duran Rosich, David, et al. (author)
  • Visualization of Large Molecular Trajectories
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506.
  • Journal article (peer-reviewed)abstract
    • The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user the information about interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is suitable also for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.
  •  
29.
  • Elmqvist, Niklas, 1977, et al. (author)
  • A Taxonomy of 3D Occlusion Management for Visualization
  • 2008
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1941-0506 .- 1077-2626. ; 14:5, s. 1095-1109
  • Journal article (peer-reviewed)abstract
    • While an important factor in depth perception, the occlusion effect in 3D environments also has a detrimental impact on tasks involving discovery, access, and spatial relation of objects in a 3D visualization. A number of interactive techniques have been developed in recent years to directly or indirectly deal with this problem using a wide range of different approaches. In this paper, we build on previous work on mapping out the problem space of 3D occlusion by defining a taxonomy of the design space of occlusion management techniques in an effort to formalize a common terminology and theoretical framework for this class of interactions. We classify a total of 50 different techniques for occlusion management using our taxonomy and then go on to analyze the results, deriving a set of five orthogonal design patterns for effective reduction of 3D occlusion. We also discuss the "gaps" in the design space, areas of the taxonomy not yet populated with existing techniques, and use these to suggest future research directions into occlusion management.
  •  
30.
  • Enderton, E., et al. (author)
  • Stochastic Transparency
  • 2011
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1941-0506 .- 1077-2626. ; 17:8, s. 1036-1047
  • Journal article (peer-reviewed)abstract
    • Stochastic transparency provides a unified approach to order-independent transparency, antialiasing, and deep shadow maps. It augments screen-door transparency using a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples of size proportional to alpha. This results in correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise by an alpha correction pass, and by an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops, other than traditional z-buffer blend operations. This makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement and supports all types of transparent geometry, able to mix hair, smoke, foliage, windows, and transparent cloth in a single scene without coding for special cases.
  •  
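A single-pixel toy model of the stipple idea in entry 30: each fragment, front to back, covers a random subset of samples with expected size proportional to its alpha, and averaging the samples approximates alpha-blended compositing (noisily, hence the paper's correction passes). This CPU sketch stands in for what the GPU does per pixel:

```python
import numpy as np

def stochastic_composite(fragments, samples=64, rng=np.random.default_rng(0)):
    """fragments: front-to-back list of (color (3,), alpha) for one pixel."""
    color = np.zeros((samples, 3))
    free = np.ones(samples, dtype=bool)        # samples not yet claimed by nearer geometry
    for c, a in fragments:
        covered = rng.random(samples) < a      # random stipple, expected coverage a*samples
        color[covered & free] = c              # nearest covering fragment wins, like a z-buffer
        free &= ~covered
    return color.mean(axis=0)                  # noisy estimate of the alpha-blended color

pixel = stochastic_composite([((1.0, 0.0, 0.0), 0.5), ((0.0, 0.0, 1.0), 0.5)])
```

With independent coverage per fragment, the expectation of the averaged color equals the sorted over-operator result, without ever sorting the fragments.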
31.
  • Engel, Dominik, et al. (author)
  • Deep Volumetric Ambient Occlusion
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE COMPUTER SOC. - 1077-2626 .- 1941-0506. ; 27:2, s. 1268-1278
  • Journal article (peer-reviewed)abstract
    • We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
  •  
32.
  • Espadoto, Mateus, et al. (author)
  • Toward a Quantitative Survey of Dimension Reduction Techniques
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 27:3, s. 2153-2173
  • Journal article (peer-reviewed)abstract
    • Dimensionality reduction methods, also known as projections, are frequently used in multidimensional data exploration in machine learning, data science, and information visualization. Tens of such techniques have been proposed, aiming to address a wide set of requirements, such as ability to show the high-dimensional data structure, distance or neighborhood preservation, computational scalability, stability to data noise and/or outliers, and practical ease of use. However, it is far from clear for practitioners how to choose the best technique for a given use context. We present a survey of a wide body of projection techniques that helps answering this question. For this, we characterize the input data space, projection techniques, and the quality of projections, by several quantitative metrics. We sample these three spaces according to these metrics, aiming at good coverage with bounded effort. We describe our measurements and outline observed dependencies of the measured variables. Based on these results, we draw several conclusions that help comparing projection techniques, explain their results for different types of data, and ultimately help practitioners when choosing a projection for a given context. Our methodology, datasets, projection implementations, metrics, visualizations, and results are publicly open, so interested stakeholders can examine and/or extend this benchmark.
  •  
33.
  • Etiene, Tiago, et al. (author)
  • Verifying Volume Rendering Using Discretization Error Analysis
  • 2014
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 20:1, s. 140-154
  • Journal article (peer-reviewed)abstract
    • We propose an approach for verification of volume rendering correctness based on an analysis of the volume rendering integral, the basis of most DVR algorithms. With respect to the most common discretization of this continuous model (Riemann summation), we make assumptions about the impact of parameter changes on the rendered results and derive convergence curves describing the expected behavior. Specifically, we progressively refine the number of samples along the ray, the grid size, and the pixel size, and evaluate how the errors observed during refinement compare against the expected approximation errors. We derive the theoretical foundations of our verification approach, explain how to realize it in practice, and discuss its limitations. We also report the errors identified by our approach when applied to two publicly available volume rendering packages.
  •  
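The verification logic in entry 33, refining a discretization and comparing observed errors against the expected approximation order, reduces to estimating convergence rates from successive errors. A self-contained example with a left Riemann sum, which is first order in the step size:

```python
import numpy as np

def observed_order(errors):
    """errors: errors measured at step sizes h, h/2, h/4, ...
    Returns the observed convergence order between successive refinements."""
    e = np.asarray(errors, float)
    return np.log2(e[:-1] / e[1:])   # ~1.0 expected for a first-order scheme

# Example: left Riemann sums of exp on [0, 1] versus the exact integral e - 1
exact = np.e - 1.0
errs = []
for n in [64, 128, 256, 512]:
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    errs.append(abs(np.exp(x).sum() / n - exact))
print(observed_order(errs))          # approaches [1., 1., 1.]
```

A renderer whose observed order deviates from the order its discretization predicts has a bug or an unstated approximation, which is exactly the kind of discrepancy the paper's approach is designed to surface.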
34.
  • Falk, Martin, Dr.rer.nat. 1981-, et al. (author)
  • A Visual Environment for Data Driven Protein Modeling and Validation
  • 2023
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506.
  • Journal article (peer-reviewed)abstract
    • In structural biology, validation and verification of new atomic models are crucial and necessary steps which limit the production of reliable molecular models for publications and databases. An atomic model is the result of meticulous modeling and matching and is evaluated using a variety of metrics that provide clues to improve and refine the model so it fits our understanding of molecules and physical constraints. In cryo-electron microscopy (cryo-EM), the validation is also part of an iterative modeling process in which there is a need to judge the quality of the model during the creation phase. A shortcoming is that the process and results of the validation are rarely communicated using visual metaphors. This work presents a visual framework for molecular validation. The framework was developed in close collaboration with domain experts in a participatory design process. Its core is a novel visual representation based on 2D heatmaps that shows all available validation metrics in a linear fashion, presenting a global overview of the atomic model and providing domain experts with interactive analysis tools. Additional information stemming from the underlying data, such as a variety of local quality measures, is used to guide the user's attention toward regions of higher relevance. Linked with the heatmap is a three-dimensional molecular visualization providing the spatial context of the structures and chosen metrics. Additional views of statistical properties of the structure are included in the visual framework. We demonstrate the utility of the framework and its visual guidance with examples from cryo-EM.
  •  
35.
  • Falk, Martin, et al. (author)
  • Output-Sensitive 3D Line Integral Convolution
  • 2008
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 14:4, s. 820-834
  • Journal article (peer-reviewed)abstract
    • We propose a largely output-sensitive visualization method for 3D line integral convolution (LIC) whose rendering speed is mainly independent of the data set size and mostly governed by the complexity of the output on the image plane. Our approach of view-dependent visualization tightly links the LIC generation with the volume rendering of the LIC result in order to avoid the computation of unnecessary LIC points: early-ray termination and empty-space leaping techniques are used to skip the computation of the LIC integral in a lazy-evaluation approach; both ray casting and texture slicing can be used as volume-rendering techniques. The input noise is modeled in object space to allow for temporal coherence under object and camera motion. Different noise models are discussed, covering dense representations based on filtered white noise all the way to sparse representations similar to oriented LIC. Aliasing artifacts are avoided by frequency control over the 3D noise and by employing a 3D variant of MIP mapping. A range of illumination models is applied to the LIC streamlines: different codimension-2 lighting models and a novel gradient-based illumination model that relies on precomputed gradients and does not require any direct calculation of gradients after the LIC integral is evaluated. We discuss the issue of proper sampling of the LIC and volume-rendering integrals by employing a frequency-space analysis of the noise model and the precomputed gradients. Finally, we demonstrate that our visualization approach lends itself to a fast graphics processing unit (GPU) implementation that supports both steady and unsteady flow. Therefore, this 3D LIC method allows users to interactively explore 3D flow by means of high-quality, view-dependent, and adaptive LIC volume visualization. Applications to flow visualization in combination with feature extraction and focus-and-context visualization are described, a comparison to previous methods is provided, and a detailed performance analysis is included.
  •  
36.
  • Feng, Louis, et al. (author)
  • Anisotropic Noise Samples
  • 2008
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 14:2, s. 342-354
  • Journal article (peer-reviewed)abstract
    • We present a practical approach to generate stochastic anisotropic samples with Poisson-disk characteristic over a two-dimensional domain. In contrast to isotropic samples, we understand anisotropic samples as non-overlapping ellipses whose size and density match a given anisotropic metric. Anisotropic noise samples are useful for many visualization and graphics applications. The spot samples can be used as input for texture generation, e.g., line integral convolution (LIC), but can also be used directly for visualization. The definition of the spot samples using a metric tensor makes them especially suitable for the visualization of tensor fields that can be translated into a metric. Our work combines ideas from sampling theory and mesh generation. To generate these samples with the desired properties, we construct a first set of non-overlapping ellipses whose distribution closely matches the underlying metric. This set of samples is used as input for a generalized anisotropic Lloyd relaxation to distribute noise samples more evenly. Instead of computing the Voronoi tessellation explicitly, we introduce a discrete approach which combines the Voronoi cell and centroid computation in one step. Our method supports automatic packing of the elliptical samples, resulting in textures similar to those generated by anisotropic reaction-diffusion methods. We use Fourier analysis tools for quality measurement of uniformly distributed samples. The resulting samples have nice sampling properties; for example, they satisfy a blue noise property where low frequencies in the power spectrum are reduced to a minimum.
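A minimal sketch of the anisotropic sampling constraint follows: candidates are accepted only if they keep a minimum distance to all accepted samples under a spatially varying metric. The toy metric and dart-throwing loop are assumptions; the paper's Lloyd relaxation step is not reproduced.

```python
# Anisotropic Poisson-disk ("dart throwing") sampling: a candidate is
# accepted only if its distance to every accepted sample, measured in a
# spatially varying metric M(x), exceeds a radius r.
import numpy as np

def metric(p):
    """Toy anisotropic metric: ellipses stretched along x for small y."""
    s = 1.0 + 4.0 * abs(p[1])
    return np.diag([1.0, s**2])

def anisotropic_dist(p, q):
    d = p - q
    m = 0.5 * (metric(p) + metric(q))  # symmetrized metric
    return np.sqrt(d @ m @ d)

rng = np.random.default_rng(1)
samples, r = [], 0.08
for _ in range(20000):
    c = rng.random(2)
    if all(anisotropic_dist(c, s) >= r for s in samples):
        samples.append(c)
print(f"accepted {len(samples)} anisotropic samples")
```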
  •  
37.
  • Fernstad, Sara Johansson, et al. (author)
  • To Explore What Isn't There - Glyph-Based Visualization for Analysis of Missing Values
  • 2022
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 28:10, s. 3513-3529
  • Journal article (peer-reviewed)abstract
    • This article contributes a novel visualization method, the Missingness Glyph, for analysis and exploration of missing values in data. Missing values are a common challenge in most data-generating domains and may cause a range of analysis issues. Missingness in data may indicate potential problems in data collection and pre-processing, or highlight important data characteristics. While the development and improvement of statistical methods for dealing with missing data is a research area in its own right, mainly focussing on replacing missing values with estimated values, considerably less focus has been put on visualization of missing values. Nonetheless, visualization and explorative analysis have great potential to support understanding of missingness in data, and to enable novel insights into patterns of missingness in a way that statistical methods cannot. The Missingness Glyph supports identification of relevant missingness patterns in data, and is evaluated and compared to two other visualization methods in the context of these missingness patterns. The results are promising and confirm that the Missingness Glyph in several cases performs better than the alternative visualization methods.
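For readers who want the raw material such a glyph encodes, the sketch below tabulates missingness patterns (which combination of variables is absent per record) with pandas; the data frame is invented for illustration.

```python
# Tabulate missingness patterns: count how often each combination of
# missing variables occurs. This is the data a missingness glyph would
# visually encode; the records here are made up.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [34, np.nan, 51, np.nan, 29],
    "income": [np.nan, np.nan, 42000, 39000, np.nan],
    "city":   ["Umea", "Lund", None, "Lund", None],
})
patterns = df.isna().value_counts()   # one row per missingness pattern
print(patterns)
```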
  •  
38.
  • Feyer, Stefan P., et al. (author)
  • 2D, 2.5D, or 3D? : An Exploratory Study on Multilayer Network Visualisations in Virtual Reality
  • 2024
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 30:1, s. 469-479
  • Journal article (peer-reviewed)abstract
    • Relational information between different types of entities is often modelled by a multilayer network (MLN) - a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation; however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirically based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs.
  •  
39.
  • Funck, W. von, et al. (author)
  • Smoke Surfaces : An Interactive Flow Visualization Technique Inspired by Real-World Flow Experiments
  • 2008
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 14:6, s. 1396-1403
  • Journal article (peer-reviewed)abstract
    • Smoke rendering is a standard technique for flow visualization. Most approaches are based on a volumetric, particle-based, or image-based representation of the smoke. This paper introduces an alternative representation of smoke structures: as semi-transparent streak surfaces. In order to make streak surface integration fast enough for interactive applications, we avoid expensive adaptive retriangulations by coupling the opacity of the triangles to their shapes. This way, the surface shows a smoke-like look even in rather turbulent areas. Furthermore, we show modifications of the approach to mimic smoke nozzles, wool tufts, and time surfaces. The technique is applied to a number of test data sets.
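The shape-to-opacity coupling can be illustrated compactly: a triangle that has stretched far from a reference shape is faded out rather than retriangulated. The falloff and reference area below are assumptions for illustration.

```python
# Couple a streak-surface triangle's opacity to its shape: triangles
# that have grown or shrunk far from a reference area become nearly
# transparent instead of being retriangulated.
import numpy as np

def triangle_area(a, b, c):
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def triangle_opacity(a, b, c, ref_area, alpha_max=0.6):
    """Fade out triangles whose area deviates from the reference."""
    area = triangle_area(a, b, c)
    stretch = max(area, ref_area) / max(min(area, ref_area), 1e-12)
    return alpha_max / stretch        # heavily distorted -> nearly invisible

a, b, c = (np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0)])
print(triangle_opacity(a, b, c, ref_area=0.1))
```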
  •  
40.
  • Gimenez, Alfredo, et al. (author)
  • MemAxes : Visualization and Analytics for Characterizing Complex Memory Performance Behaviors
  • 2018
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 24:7, s. 2180-2193
  • Journal article (peer-reviewed)abstract
    • Memory performance is often a major bottleneck for high-performance computing (HPC) applications. Deepening memory hierarchies, complex memory management, and non-uniform access times have made memory performance behavior difficult to characterize, and users require novel, sophisticated tools to analyze and optimize this aspect of their codes. Existing tools target only specific factors of memory performance, such as hardware layout, allocations, or access instructions. However, today's tools do not suffice to characterize the complex relationships between these factors. Further, they require advanced expertise to be used effectively. We present MemAxes, a tool based on a novel approach for analytic-driven visualization of memory performance data. MemAxes uniquely allows users to analyze the different aspects related to memory performance by providing multiple visual contexts for a centralized dataset. We define mappings of sampled memory access data to new and existing visual metaphors, each of which enables a user to perform different analysis tasks. We present methods to guide user interaction by scoring subsets of the data based on known performance problems. This scoring is used to provide visual cues and automatically extract clusters of interest. We designed MemAxes in collaboration with experts in HPC and demonstrate its effectiveness in case studies.
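As a toy illustration of scoring samples against a known performance problem, the sketch below weights memory-access samples by latency and penalizes remote NUMA accesses; the fields and weights are invented and differ from MemAxes' actual scoring functions.

```python
# Score memory-access samples against a hypothetical "remote NUMA
# access" performance problem; the highest-scoring samples would get
# visual emphasis. Sample fields and weights are made up.
samples = [
    {"latency_cycles": 12,  "numa_local": True},
    {"latency_cycles": 310, "numa_local": False},
    {"latency_cycles": 45,  "numa_local": True},
]

def score(s, remote_penalty=2.0):
    """Higher score = more likely part of a memory bottleneck."""
    return s["latency_cycles"] * (1.0 if s["numa_local"] else remote_penalty)

hot = sorted(samples, key=score, reverse=True)
print(hot[0])                          # worst offender first
```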
  •  
41.
  • Günther, D., et al. (author)
  • Fast and Memory-Efficient Topological Denoising of 2D and 3D Scalar Fields
  • 2014
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 20:12, s. 2585-2594
  • Journal article (peer-reviewed)abstract
    • Data acquisition, numerical inaccuracies, and sampling often introduce noise in measurements and simulations. Removing this noise is often necessary for efficient analysis and visualization of this data, yet many denoising techniques change the minima and maxima of a scalar field. For example, the extrema can appear or disappear, spatially move, and change their value. This can lead to wrong interpretations of the data, e.g., when the maximum temperature over an area is falsely reported as being a few degrees cooler because the denoising method is unaware of these features. Recently, a topological denoising technique based on a global energy optimization was proposed, which allows the topology-controlled denoising of 2D scalar fields. While this method preserves the minima and maxima, it is constrained by the size of the data. We extend this work to large 2D data and medium-sized 3D data by introducing a novel domain decomposition approach. It allows processing small patches of the domain independently while still avoiding the introduction of new critical points. Furthermore, we propose an iterative refinement of the solution, which decreases the optimization energy compared to the previous approach and therefore gives smoother results that are closer to the input. We illustrate our technique on synthetic and real-world 2D and 3D data sets that highlight potential applications.
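The guarantee at the heart of the method, that no extrema appear or disappear, can be checked directly. The sketch below compares the sets of strict local maxima of a 2D field before and after denoising; it is a verification aid, not the optimization itself.

```python
# Check the topology-preservation property: the set of strict local
# maxima (4-neighborhood, interior pixels) must be identical before and
# after denoising. The "denoiser" below is a stand-in.
import numpy as np

def local_maxima(f):
    inner = f[1:-1, 1:-1]
    mask = ((inner > f[:-2, 1:-1]) & (inner > f[2:, 1:-1]) &
            (inner > f[1:-1, :-2]) & (inner > f[1:-1, 2:]))
    return {tuple(ix) for ix in np.argwhere(mask) + 1}

rng = np.random.default_rng(2)
noisy = rng.random((64, 64))
denoised = noisy  # stand-in for a topology-controlled denoiser
print("extrema preserved:", local_maxima(noisy) == local_maxima(denoised))
```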
  •  
42.
  • Heiberg, Einar, 1973-, et al. (author)
  • Three-dimensional flow characterization using vector pattern matching
  • 2003
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1077-2626 .- 1941-0506. ; 9:3, s. 313-319
  • Journal article (peer-reviewed)abstract
    • This paper describes a novel method for regional characterization of three-dimensional vector fields using a pattern matching approach. Given a three-dimensional vector field, the goal is to automatically locate, identify, and visualize a selected set of classes of structures or features. Rather than analytically defining the properties that a region must fulfil in order to be classified as a specific structure, a set of idealized patterns for each structure type is constructed. Similarity to these patterns is then defined and calculated. Examples of structures of interest include vortices, swirling flow, diverging or converging flow, and parallel flow. Both medical and aerodynamic applications are presented in this paper.
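The core matching step can be sketched as a normalized inner product between a local vector neighborhood and an idealized template, here a 2D swirl; the template and similarity measure are illustrative stand-ins for the paper's exact definitions.

```python
# Pattern matching for vector fields: similarity between a local
# neighborhood and an idealized template (a rotational "swirl" pattern)
# via a cosine similarity of the stacked vectors.
import numpy as np

def swirl_template(n=5):
    ys, xs = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    t = np.stack([-ys, xs], axis=-1)          # idealized rotation
    return t / (np.linalg.norm(t) + 1e-12)

def similarity(patch, template):
    """Cosine similarity between vector neighborhoods (1 = match)."""
    p = patch / (np.linalg.norm(patch) + 1e-12)
    return float((p * template).sum())

rng = np.random.default_rng(3)
patch = swirl_template() + 0.05 * rng.normal(size=(5, 5, 2))
print(similarity(patch, swirl_template()))    # close to 1 for swirling flow
```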
  •  
43.
  • Helgeland, A., et al. (author)
  • Visualization of vorticity and vortices in wall-bounded turbulent flows
  • 2007
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1941-0506 .- 1077-2626. ; 13:5, s. 1055-1066
  • Journal article (peer-reviewed)abstract
    • This study was initiated by the scientifically interesting prospect of applying advanced visualization techniques to gain further insight into various spatio-temporal characteristics of turbulent flows. The ability to study complex kinematical and dynamical features of turbulence provides a means of extracting the underlying physics of turbulent fluid motion. The objective is to analyze the use of a vorticity field line approach to study numerically generated incompressible turbulent flows. In order to study the vorticity field, we present a field line animation technique that uses a specialized particle advection and seeding strategy. Efficient analysis is achieved by decoupling the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. Multifield visualizations are obtained using a flexible volume rendering framework, which is presented in this paper. Vorticity field lines have been employed as indicators to provide a means to identify "ejection" and "sweep" regions, two particularly important spatio-temporal events in wall-bounded turbulent flows. Their relation to the rate of turbulent kinetic energy production and viscous dissipation, respectively, has been identified.
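The basic building block behind such field line animation, integrating a line through the vorticity field, can be sketched with classic RK4 steps over an analytic toy field.

```python
# Trace a field line through a (here analytic) vorticity field with
# classic fourth-order Runge-Kutta steps; the resulting polyline is
# what a renderer would draw and animate. The field is a toy example.
import numpy as np

def vorticity(p):
    x, y, z = p
    return np.array([np.sin(z), np.cos(z), 0.3])

def trace_field_line(seed, h=0.05, steps=200):
    pts = [np.asarray(seed, float)]
    for _ in range(steps):
        p = pts[-1]
        k1 = vorticity(p)
        k2 = vorticity(p + 0.5 * h * k1)
        k3 = vorticity(p + 0.5 * h * k2)
        k4 = vorticity(p + h * k3)
        pts.append(p + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4))
    return np.array(pts)

line = trace_field_line([0.0, 0.0, 0.0])
print(line.shape)                      # (201, 3) polyline for rendering
```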
  •  
44.
  • Helske, Jouni, et al. (author)
  • Can Visualization Alleviate Dichotomous Thinking? : Effects of Visual Representations on the Cliff Effect
  • 2021
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 27:8, s. 3397-3409
  • Journal article (peer-reviewed)abstract
    • Common reporting styles for statistical results in scientific articles, such as p-values and confidence intervals (CI), have been reported to be prone to dichotomous interpretations, especially with respect to the null hypothesis significance testing framework. For example, when the p-value is small enough or the CIs of the mean effects of a studied drug and a placebo are not overlapping, scientists tend to claim significant differences while often disregarding the magnitudes and absolute differences in the effect sizes. This type of reasoning has been shown to be potentially harmful to science. Techniques relying on the visual estimation of the strength of evidence have been recommended to reduce such dichotomous interpretations, but their effectiveness has also been challenged. We ran two experiments on researchers with expertise in statistical analysis to compare several alternative representations of confidence intervals and used Bayesian multilevel models to estimate the effects of the representation styles on differences in researchers' subjective confidence in the results. We also asked the respondents' opinions and preferences regarding representation styles. Our results suggest that adding visual information to the classic CI representation can decrease the tendency towards dichotomous interpretations - measured as the cliff effect: the sudden drop in confidence around p-value 0.05 - compared with classic CI visualization and textual representation of the CI with p-values. All data and analyses are publicly available at https://github.com/helske/statvis.
  •  
45.
  • Hernell, Frida, et al. (author)
  • Local Ambient Occlusion in Direct Volume Rendering
  • 2010
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 16:4, s. 548-559
  • Journal article (peer-reviewed)abstract
    • This paper presents a novel technique to efficiently compute illumination for Direct Volume Rendering using a local approximation of ambient occlusion to integrate the intensity of incident light for each voxel. An advantage with this local approach is that fully shadowed regions are avoided, a desirable feature in many applications of volume rendering such as medical visualization. Additional transfer function interactions are also presented, for instance, to highlight specific structures with luminous tissue effects and create an improved context for semitransparent tissues with a separate absorption control for the illumination settings. Multiresolution volume management and GPU-based computation are used to accelerate the calculations and support large data sets. The scheme yields interactive frame rates with an adaptive sampling approach for incrementally refined illumination under arbitrary transfer function changes. The illumination effects can give a better understanding of the shape and density of tissues and so has the potential to increase the diagnostic value of medical volume rendering. Since the proposed method is gradient-free, it is especially beneficial at the borders of clip planes, where gradients are undefined, and for noisy data sets.
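A gradient-free local ambient occlusion term of the kind described can be sketched by marching short rays through the opacity volume around a voxel; the ray count, radius, and toy volume below are assumptions, and the paper's GPU multiresolution machinery is omitted.

```python
# Local ambient occlusion for a voxel: average the light remaining after
# marching short rays through the opacity volume in random directions.
# No gradients are needed, which is the property the paper exploits.
import numpy as np

rng = np.random.default_rng(4)

def opacity_at(vol, p):
    i = np.clip(np.round(p).astype(int), 0, np.array(vol.shape) - 1)
    return vol[tuple(i)]

def local_ao(vol, p, n_dirs=64, radius=4.0, n_steps=8):
    total = 0.0
    for _ in range(n_dirs):
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)
        transmittance = 1.0
        for s in range(1, n_steps + 1):
            transmittance *= 1.0 - opacity_at(vol, p + d * radius * s / n_steps)
        total += transmittance
    return total / n_dirs              # 1 = fully lit, 0 = fully occluded

vol = np.zeros((32, 32, 32))
vol[12:20, 12:20, 12:20] = 0.3         # a semi-transparent block
print(local_ao(vol, np.array([16.0, 16.0, 11.0])))
```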
  •  
46.
  • Hilasaca, Gladys M., et al. (author)
  • A Grid-based Method for Removing Overlaps of Dimensionality Reduction Scatterplot Layouts
  • 2024
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE. - 1077-2626 .- 1941-0506. ; 30:8, s. 5733-5749
  • Journal article (peer-reviewed)abstract
    • Dimensionality Reduction (DR) scatterplot layouts have become a ubiquitous visualization tool for analyzing multidimensional datasets. Despite their popularity, such scatterplots suffer from occlusion, especially when informative glyphs are used to represent data instances, potentially obfuscating critical information for the analysis under execution. Different strategies have been devised to address this issue, either producing overlap-free layouts that lack the powerful capabilities of contemporary DR techniques in uncovering interesting data patterns or eliminating overlaps as a post-processing strategy. Despite the good results of post-processing techniques, most of the best methods typically expand or distort the scatterplot area, thus reducing glyphs' size (sometimes) to unreadable dimensions, defeating the purpose of removing overlaps. This article presents Distance Grid (DGrid), a novel post-processing strategy to remove overlaps from DR layouts that faithfully preserves the original layout's characteristics and bounds the minimum glyph sizes. We show that DGrid surpasses the state-of-the-art in overlap removal (through an extensive comparative evaluation considering multiple different metrics) while also being one of the fastest techniques, especially for large datasets. A user study with 51 participants also shows that DGrid is consistently ranked among the top techniques for preserving the original scatterplots' visual characteristics and the aesthetics of the final results.
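One simplified reading of the grid-assignment idea is a recursive bisection that sorts points by their dominant coordinate and assigns them to grid halves, keeping neighbors near their original positions. The sketch below follows that reading and assumes at least as many cells as points; it is not a faithful DGrid reimplementation.

```python
# Assign scatterplot points to a rows x cols grid by recursive
# bisection: split points by y into top/bottom row bands, by x into
# left/right column bands, until each point owns one cell.
# Simplified illustration; assumes #cells >= #points.
import numpy as np

def assign(points, idx, rows, cols, r0=0, c0=0, out=None):
    if out is None:
        out = {}
    if len(idx) == 0:
        return out
    if rows == 1 and cols == 1:
        out[idx[0]] = (r0, c0)         # one point per cell
        return out
    if rows >= cols:                   # split along y into row bands
        order = idx[np.argsort(points[idx, 1])]
        top = rows // 2
        k = min(len(order), top * cols)
        assign(points, order[:k], top, cols, r0, c0, out)
        assign(points, order[k:], rows - top, cols, r0 + top, c0, out)
    else:                              # split along x into column bands
        order = idx[np.argsort(points[idx, 0])]
        left = cols // 2
        k = min(len(order), rows * left)
        assign(points, order[:k], rows, left, r0, c0, out)
        assign(points, order[k:], rows, cols - left, r0, c0 + left, out)
    return out

pts = np.random.default_rng(5).random((9, 2))
print(assign(pts, np.arange(9), rows=3, cols=3))  # point index -> cell
```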
  •  
47.
  • Hoshikawa, Yukai, et al. (author)
  • RedirectedDoors+: Door-Opening Redirection with Dynamic Haptics in Room-Scale VR
  • 2024
  • In: IEEE Transactions on Visualization and Computer Graphics. - 1941-0506 .- 1077-2626. ; 30:5, s. 2276-2286
  • Journal article (peer-reviewed)abstract
    • RedirectedDoors is a visuo-haptic door-opening redirection technique in VR, and it has shown promise in its ability to efficiently compress the physical space required for a room-scale VR experience. However, its previous implementation has only supported laboratory experiments with a single door opening at a fixed location. To significantly expand this technique for room-scale VR, we have developed RedirectedDoors+, a robot-based system that permits consecutive door-opening redirection with haptics. Specifically, our system is built around three main components: (1) door robots, a small number of wheeled robots equipped with a doorknob-like prop, (2) a robot-positioning algorithm that arbitrarily positions the door robots to provide the user with just-in-time haptic feedback during door opening, and (3) a user-steering algorithm that determines the redirection gain for every instance of door opening to keep the user away from the boundary of the play area. Results of simulated VR exploration in six virtual environments reveal our system's performance relative to user walking speed, paths, and number of door robots, from which we derive its usage guidelines. We then conduct a user study (N = 12) in which participants experience a walkthrough application using the actual system. The results demonstrate that the system is able to provide haptic feedback while redirecting the user within a limited play area.
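The user-steering component can be caricatured as choosing, per door opening, a rotation gain that nudges the user's physical path toward the play-area center. The bounds and geometry below are invented, not the paper's calibrated values.

```python
# Pick a redirection gain per door opening: amplify or dampen the user's
# physical turn depending on which choice steers them toward the center
# of the play area. Gain bounds and 2D geometry are assumptions.
import numpy as np

def steering_gain(user_pos, user_dir, center, g_min=0.8, g_max=1.3):
    to_center = center - user_pos
    to_center /= np.linalg.norm(to_center) + 1e-12
    # signed 2D cross product: >0 means amplifying the turn helps
    side = user_dir[0] * to_center[1] - user_dir[1] * to_center[0]
    return g_max if side > 0 else g_min

print(steering_gain(np.array([1.0, 0.5]), np.array([1.0, 0.0]),
                    np.array([0.0, 0.0])))
```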
  •  
48.
  • Isaacs, Katherine, et al. (author)
  • Combing the Communication Hairball: Visualizing Parallel Execution Traces using Logical Time
  • 2014
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Press. - 1077-2626 .- 1941-0506. ; 20:12, s. 2349-2358
  • Journal article (peer-reviewed)abstract
    • With the continuous rise in complexity of modern supercomputers, optimizing the performance of large-scale parallel programs is becoming increasingly challenging. Simultaneously, the growth in scale magnifies the impact of even minor inefficiencies - potentially millions of compute hours and megawatts in power consumption can be wasted on avoidable mistakes or sub-optimal algorithms. This makes performance analysis and optimization critical elements in the software development process. One of the most common forms of performance analysis is to study execution traces, which record a history of per-process events and interprocess messages in a parallel application. Trace visualizations allow users to browse this event history and search for insights into the observed performance behavior. However, current visualizations are difficult to understand even for small process counts and do not scale gracefully beyond a few hundred processes. Organizing events in time leads to a virtually unintelligible conglomerate of interleaved events and moderately high process counts overtax even the largest display. As an alternative, we present a new trace visualization approach based on transforming the event history into logical time inferred directly from happened-before relationships. This emphasizes the code’s structural behavior, which is much more familiar to the application developer. The original timing data, or other information, is then encoded through color, leading to a more intuitive visualization. Furthermore, we use the discrete nature of logical timelines to cluster processes according to their local behavior leading to a scalable visualization of even long traces on large process counts. We demonstrate our system using two case studies on large-scale parallel codes.
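Logical time inferred from happened-before relationships is essentially a Lamport clock assignment, which the sketch below computes for a made-up two-process trace.

```python
# Lamport logical clocks: each event's timestamp is one more than the
# latest of its same-process predecessor and, for receives, the matching
# send. The trace below is invented; events are listed so that every
# receive appears after its send.
def logical_times(n_procs, events):
    """events: list of (proc, kind, matching_send_index or None)."""
    clock = [0] * n_procs
    stamps = [0] * len(events)
    for i, (proc, kind, partner) in enumerate(events):
        t = clock[proc] + 1
        if kind == "recv":             # must come after the matching send
            t = max(t, stamps[partner] + 1)
        clock[proc] = stamps[i] = t
    return stamps

trace = [(0, "send", None), (1, "recv", 0), (1, "send", None), (0, "recv", 2)]
print(logical_times(2, trace))         # [1, 2, 3, 4]
```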
  •  
49.
  • Jankowai, Jochen, 1987-, et al. (author)
  • Feature Level-Sets : Generalizing Iso-surfaces to Multi-variate Data
  • 2020
  • In: IEEE Transactions on Visualization and Computer Graphics. - : Institute of Electrical and Electronics Engineers (IEEE). - 1077-2626 .- 1941-0506. ; 26:2, s. 1308-1319
  • Journal article (peer-reviewed)abstract
    • Iso-surfaces or level-sets provide an effective and frequently used means for feature visualization. However, they are restricted to simple features for uni-variate data. The approach does not scale when moving to multi-variate data or when considering more complex feature definitions. In this paper, we introduce the concept of traits and feature level-sets, which can be understood as a generalization of level-sets as it includes iso-surfaces and fiber surfaces as special cases. The concept is applicable to a large class of traits defined as subsets in attribute space, which can be arbitrary combinations of points, lines, surfaces and volumes. It is implemented in a system that provides an interface to define traits in an interactive way and multiple rendering options. We demonstrate the effectiveness of the approach using multi-variate data sets of different nature, including vector and tensor data, from different application domains.
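The construction reduces to computing, per grid point, a distance in attribute space to a chosen trait and then extracting a level set of that distance field. The sketch below uses a point trait over two synthetic scalar fields; the paper supports far more general traits and norms.

```python
# Feature level-set sketch: distance of each grid point's attribute
# vector to a point trait in attribute space, whose iso-contours are the
# feature level-sets. Fields and trait are synthetic examples.
import numpy as np

ys, xs = np.mgrid[-2:2:128j, -2:2:128j]
attr1 = np.exp(-(xs**2 + ys**2))           # two co-located scalar fields
attr2 = xs + ys

trait = np.array([0.5, 0.3])               # point trait in attribute space
dist = np.hypot(attr1 - trait[0], attr2 - trait[1])

# The feature level-set for level r is the iso-contour dist == r;
# marching squares/cubes would extract it, here we only mask near it.
r = 0.1
print("cells near the feature level-set:", int((np.abs(dist - r) < 0.02).sum()))
```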
  •  
50.
  • Johansson, Jimmy, et al. (author)
  • Evaluation of Parallel Coordinates: Overview, Categorization and Guidelines for Future Research
  • 2016
  • In: IEEE Transactions on Visualization and Computer Graphics. - : IEEE Computer Society. - 1077-2626 .- 1941-0506. ; 22:1, s. 579-588
  • Journal article (peer-reviewed)abstract
    • The parallel coordinates technique is widely used for the analysis of multivariate data. During recent decades significant research efforts have been devoted to exploring the applicability of the technique and to expand upon it, resulting in a variety of extensions. Of these many research activities, a surprisingly small number concerns user-centred evaluations investigating actual use and usability issues for different tasks, data and domains. The result is a clear lack of convincing evidence to support and guide uptake by users as well as future research directions. To address these issues this paper contributes a thorough literature survey of what has been done in the area of user-centred evaluation of parallel coordinates. These evaluations are divided into four categories based on a characterization of use, derived from the survey. Based on the data from the survey and the categorization, combined with the authors' experience of working with parallel coordinates, a set of guidelines for future research directions is proposed.
  •  
Type of publication
journal article (116)
conference paper (1)
Type of content
peer-reviewed (112)
other academic/artistic (5)
Author/Editor
Ynnerman, Anders (19)
Weinkauf, Tino, 1974 ... (14)
Ropinski, Timo (10)
Hotz, Ingrid (10)
Lundström, Claes, 19 ... (5)
Jönsson, Daniel (5)
Theisel, H. (5)
Ynnerman, Anders, 19 ... (5)
Kerren, Andreas, Dr. ... (5)
Seidel, H. -P (5)
Falk, Martin, Dr.rer ... (4)
Persson, Anders (4)
Johansson, Jimmy (4)
Vrotsou, Katerina, 1 ... (4)
Sintorn, Erik, 1980 (4)
Ljung, Patric, 1968- (4)
Hansen, Charles (4)
Ropinski, Timo, Prof ... (4)
Hotz, Ingrid, Profes ... (3)
Assarsson, Ulf, 1972 (3)
Forsell, Camilla (3)
Servin, Martin (3)
Silva, Claudio (3)
Emmart, Carter (3)
Bock, Alexander (3)
Lacoursière, Claude (3)
Besançon, Lonni (3)
Ljung, Patric (3)
Bock, Alexander, 198 ... (3)
Reininghaus, Jan (3)
Masood, Talha Bin (2)
Meyer, Miriah (2)
Nilsson, Susanna (2)
Bodin, Kenneth (2)
Andrienko, Gennady (2)
Andrienko, Natalia (2)
Köpp, Wiebke, 1989- (2)
Kasten, Jens (2)
Axelsson, Emil (2)
Costa, Jonathas (2)
Cooper, Matthew (2)
Wang, Bei (2)
Jonsson, Arne (2)
Dwyer, Tim (2)
Isenberg, Tobias (2)
Cooper, Matthew, 196 ... (2)
Linares, Mathieu, 19 ... (2)
Sundén, Erik (2)
Schulz, Martin (2)
Bruckner, Stefan (2)
University
Linköping University (74)
Royal Institute of Technology (19)
Linnaeus University (10)
Chalmers University of Technology (7)
Umeå University (3)
Lund University (3)
Uppsala University (2)
Blekinge Institute of Technology (2)
Stockholm University (1)
University of Gävle (1)
Mälardalen University (1)
Södertörn University (1)
University of Skövde (1)
Language
English (117)
Research subject (UKÄ/SCB)
Natural sciences (69)
Engineering and Technology (32)
Medical and Health Sciences (3)
Social Sciences (2)
