SwePub
Search the SwePub database

  Advanced search

Results list for the search "WFRF:(Bresin Roberto)"

Search: WFRF:(Bresin Roberto)

  • Results 1-50 of 180
1.
  • Bresin, Roberto, et al. (author)
  • A fuzzy approach to performance rules
  • 1995
  • In: Colloquio di Informatica Musicale - XI CIM, pp. 163-166
  • Conference paper (peer-reviewed)
2.
3.
4.
5.
6.
7.
  • Bertoni, Andrea, et al. (author)
  • Real-time musical rhythm tapping
  • 1995
  • In: Colloquio di Informatica Musicale - XI CIM, pp. 185-188
  • Conference paper (peer-reviewed)
8.
9.
  • Bolíbar, Jordi, et al. (author)
  • Sound feedback for the optimization of performance in running
  • 2012
  • In: TMH-QPSR special issue: Proceedings of SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden. Stockholm. ISSN 1104-5787. 52:1, pp. 39-40
  • Journal article (peer-reviewed)
10.
11.
12.
  • Bresin, Roberto, 1963-, et al. (author)
  • A multimedia environment for interactive music performance
  • 1997
  • In: TMH-QPSR. 38:2-3, pp. 029-032
  • Journal article (other academic/artistic), abstract:
    • We propose a music performance tool based on the Java programming language. This software runs in any Java applet viewer (i.e. a WWW browser) and interacts with the local Midi equipment by means of a multi-task software module for Midi applications (MidiShare). Two main ideas form the basis of our project: one is to realise an easy, intuitive, hardware- and software-independent tool for performance, and the other is to achieve an easier development of the tool itself. At the moment there are two projects under development: a system based only on a Java applet, called Japer (Java performer), and a hybrid system based on a Java user interface and a Lisp kernel for the development of the performance tools. In this paper, the first of the two projects is presented.
13.
14.
  • Bresin, Roberto, et al. (author)
  • Articulation strategies in expressive piano performance - Analysis of legato, staccato, and repeated notes in performances of the Andante movement of Mozart's Sonata in G major (K 545)
  • 2000
  • In: Journal of New Music Research. Informa UK Limited. ISSN 0929-8215, 1744-5027. 29:3, pp. 211-224
  • Journal article (peer-reviewed), abstract:
    • Articulation strategies applied by pianists in expressive performances of the same score are analysed. Measurements of key overlap time and its relation to the inter-onset interval are collected for notes marked legato and staccato in the first sixteen bars of the Andante movement of W.A. Mozart's Piano Sonata in G major, K 545. Five pianists played the piece nine times. First, they played in a way that they considered "optimal". In the remaining eight performances they were asked to represent different expressive characters, as specified in terms of different adjectives. Legato, staccato, and repeated-note articulation applied by the right hand were examined by means of statistical analysis. Although the results varied considerably between pianists, some trends could be observed. The pianists generally used similar strategies in the renderings intended to represent different expressive characters. Legato was played with a key overlap ratio that depended on the inter-onset interval (IOI). Staccato tones had an approximate duration of 40% of the IOI. Repeated notes were played with a duration of about 60% of the IOI. The results seem useful as a basis for articulation rules in grammars for automatic piano performance.
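The duration figures reported in entry 14 translate directly into a tiny articulation rule. A minimal Python sketch follows; the 40% and 60% constants come from the abstract, while the legato overlap formula and the tone_duration helper are placeholder assumptions (the paper only states that the overlap ratio depends on the IOI):

    # Sketch of an articulation rule based on the trends reported above.
    # The staccato/repeated-note constants are the averages quoted in the
    # abstract; the legato overlap model is an assumed placeholder.

    def tone_duration(ioi_ms: float, articulation: str) -> float:
        """Sounding duration (ms) of a tone, given its inter-onset interval."""
        if articulation == "staccato":
            return 0.40 * ioi_ms  # staccato tones lasted about 40% of the IOI
        if articulation == "repeated":
            return 0.60 * ioi_ms  # repeated notes lasted about 60% of the IOI
        if articulation == "legato":
            # Assumption: the key overlap ratio shrinks as the IOI grows.
            overlap_ratio = max(0.02, 0.10 - 0.0001 * ioi_ms)
            return (1.0 + overlap_ratio) * ioi_ms
        return ioi_ms  # default: detach exactly at the next onset

    print(tone_duration(500.0, "staccato"))  # 200.0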
15.
  • Bresin, Roberto (author)
  • Artificial neural networks based models for automatic performance of musical scores
  • 1998
  • In: Journal of New Music Research. Informa UK Limited. ISSN 0929-8215, 1744-5027. 27:3, pp. 239-270
  • Journal article (peer-reviewed), abstract:
    • This article briefly summarises the author's research on automatic performance, started at CSC (Centro di Sonologia Computazionale, University of Padua) and continued at TMH-KTH (Department of Speech, Music and Hearing at the Royal Institute of Technology, Stockholm). The focus is on the evolution of the architecture of an artificial neural networks (ANNs) framework, from the first simple model, able to learn the KTH performance rules, to the final one, which accurately simulates the style of a real pianist, including time and loudness deviations. The task was to analyse and synthesise the performance process of a professional pianist playing on a Disklavier. An automatic analysis extracts all performance parameters of the pianist, starting from the KTH rule system. The system possesses good generalisation properties: applying the same ANN, it is possible to perform different scores in the performing style used for the training of the networks. Brief descriptions of the program Melodia and of the two Java applets Japer and Jalisper are given in the Appendix. In Melodia, developed at the CSC, the user can run either rules or ANNs and study their different effects. Japer and Jalisper, developed at TMH, implement in real time on the web the performance rules developed at TMH plus new features achieved by using ANNs.
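Entry 15 (and the related thesis at entry 43) describes a network that predicts each note's performance from its local context: the last played note plus a three-note lookahead in the score. A minimal sketch of such an input encoding, with assumed features and names; the article's actual architecture and feature set are not reproduced here:

    # Sketch of a context-window encoding for a note-level performance ANN
    # (assumed features; the actual architecture in the article differs).
    from dataclasses import dataclass

    @dataclass
    class ScoreNote:
        pitch: int       # MIDI pitch 0-127
        duration: float  # nominal duration in beats

    def make_input_vector(prev_deviation, lookahead):
        """Combine the last played note's realized deviations with a
        three-note score lookahead into one flat feature vector."""
        assert len(lookahead) == 3, "the model looks three notes ahead"
        vec = list(prev_deviation)  # (timing deviation, loudness deviation)
        for note in lookahead:
            vec += [note.pitch / 127.0, note.duration]
        return vec

    # A trained regressor would then predict the current note's deviations:
    # deviations = ann.predict(make_input_vector(prev, next_three))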
16.
  • Bresin, Roberto, et al. (author)
  • Auditory feedback through continuous control of crumpling sound synthesis
  • 2008
  • In: Proceedings of Sonic Interaction Design. IUAV University of Venice. ISBN 9788890341304, pp. 23-28
  • Conference paper (peer-reviewed), abstract:
    • A real-time model for the synthesis of crumpling sounds is presented. By capturing the statistics of the short sonic transients which give rise to crackling noise, it allows for a consistent description of a broad spectrum of audible physical processes which emerge in several everyday interaction contexts. The model drives a nonlinear impactor that sonifies every transient, and it can be parameterized depending on the physical attributes of the crumpling material. Three different scenarios are described, respectively simulating the foot interaction with aggregate ground materials, augmenting a dining scenario, and affecting the emotional content of a footstep sequence. Taken altogether, they emphasize the potential generalizability of the model to situations in which a precise control of auditory feedback can significantly increase the enactivity and ecological validity of an interface.
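Entry 16's model is statistical at heart: crackling processes are often approximated as streams of short transients with heavy-tailed energies. A toy sketch under those assumptions; the rate and exponent values are invented, and the real model derives such parameters from the physical attributes of the material and feeds every transient to a nonlinear impact synthesizer rather than printing it:

    # Toy sketch of a crackling-event stream (assumptions: Poisson arrival
    # times and power-law energies; the rate/exponent values are invented).
    import random

    def crackling_events(duration_s, rate_hz=80.0, exponent=1.5, e_min=1e-4):
        """Yield (time_s, energy) pairs for short sonic transients."""
        t = 0.0
        while True:
            t += random.expovariate(rate_hz)  # Poisson inter-arrival times
            if t >= duration_s:
                return
            u = random.random()
            energy = e_min * u ** (-1.0 / (exponent - 1.0))  # heavy-tailed energy
            yield t, energy

    for when, energy in crackling_events(0.1):
        print(f"transient at {when * 1000:.1f} ms, energy {energy:.2e}")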
17.
  • Bresin, Roberto, et al. (author)
  • Controlling sound production
  • 2008
  • In: Sound to Sense, Sense to Sound. Berlin: Logos Verlag. ISBN 9783832516000, pp. 447-486
  • Book chapter (peer-reviewed)
18.
19.
  • Bresin, Roberto, et al. (author)
  • Director musices : The KTH performance rules system
  • 2002
  • In: Proceedings of SIGMUS-46. Information Processing Society of Japan, pp. 43-48
  • Conference paper (peer-reviewed), abstract:
    • Director Musices is a program that transforms notated scores into musical performances. It implements the performance rules emerging from research projects at the Royal Institute of Technology (KTH). Rules in the program model performance aspects such as phrasing, articulation, and intonation, and they operate on performance variables such as tone, inter-onset duration, amplitude, and pitch. By manipulating rule parameters, the user can act as a metaperformer controlling different features of the performance, leaving the technical execution to the computer. Different interpretations of the same piece can easily be obtained. Features of Director Musices include MIDI file input and output, rule palettes, graphical display of all performance variables (along with the notation), and user-defined performance rules. The program is implemented in Common Lisp and is available free as a stand-alone application for both Macintosh and Windows platforms. Further information, including music examples, publications, and the program itself, is located online at http://www.speech.kth.se/music/performance. This paper is a revised and updated version of a previous paper published in the Computer Music Journal in 2000 that was mainly written by Anders Friberg (Friberg, Colombo, Frydén and Sundberg, 2000).
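The rule-palette idea in entry 19 is easy to sketch: each rule proposes per-note deviations of the performance variables, and a user-set quantity k scales how strongly the rule applies. A schematic Python sketch with an invented example rule; Director Musices itself is written in Common Lisp and its actual rules differ:

    # Schematic sketch of a rule palette (not Director Musices' internals):
    # each rule maps the score to per-note deviations, and the user-set k
    # value scales how strongly that rule is applied.

    def duration_contrast(notes):
        """Invented example rule: exaggerate deviations from the mean IOI."""
        mean = sum(n["ioi"] for n in notes) / len(notes)
        return [{"ioi": 0.1 * (n["ioi"] - mean)} for n in notes]

    def apply_palette(notes, palette):
        """palette: list of (rule, k) pairs; deviations from all rules add up."""
        for rule, k in palette:
            for note, deviation in zip(notes, rule(notes)):
                for var, delta in deviation.items():
                    note[var] += k * delta
        return notes

    score = [{"ioi": 250.0}, {"ioi": 500.0}, {"ioi": 250.0}]
    print(apply_palette(score, [(duration_contrast, 2.0)]))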
20.
  • Bresin, Roberto, 1963-, et al. (author)
  • Emotion rendering in music : Range and characteristic values of seven musical variables
  • 2011
  • In: Cortex. Elsevier BV. ISSN 0010-9452, 1973-8102. 47:9, pp. 1068-1081
  • Journal article (peer-reviewed), abstract:
    • Many studies on the synthesis of emotional expression in music performance have focused on the effect of individual performance variables on perceived emotional quality by making a systematic variation of variables. However, most of the studies have used a predetermined small number of levels for each variable, and the selection of these levels has often been done arbitrarily. The main aim of this research work is to improve upon existing methodologies by taking a synthesis approach. In a production experiment, 20 performers were asked to manipulate the values of 7 musical variables simultaneously (tempo, sound level, articulation, phrasing, register, timbre, and attack speed) to communicate 5 different emotional expressions (neutral, happy, scary, peaceful, sad) for each of 4 scores. The scores were compositions communicating four different emotions (happiness, sadness, fear, calmness). Emotional expressions and music scores were presented in combination and in random order for each performer, for a total of 5 x 4 stimuli. The experiment allowed for a systematic investigation of the interaction between the emotion of each score and the emotions the performers intended to express. A two-way analysis of variance (ANOVA), repeated measures, with factors emotion and score was conducted on the participants' values separately for each of the seven musical factors. There are two main results. The first is that the musical variables were manipulated in the same direction as reported in previous research on emotionally expressive music performance. The second is the identification, for each of the five emotions, of the mean values and ranges of the five musical variables tempo, sound level, articulation, register, and instrument. These values turned out to be independent of the particular score and its emotion. The results presented in this study therefore allow for both the design and control of emotionally expressive computerized musical stimuli that are more ecologically valid than stimuli without performance variations.
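Characteristic values like those identified in entry 20 are typically consumed as a lookup table by a rendering system. A sketch of that usage pattern; all numbers below are invented placeholders, not the means and ranges reported in the paper:

    # Sketch of table-driven emotion rendering. All numbers are invented
    # placeholders; the paper reports measured means and ranges per emotion.

    EMOTION_SETTINGS = {
        # tempo scale, sound level offset (dB), articulation (0=legato .. 1=staccato)
        "happy":    {"tempo": 1.2, "level_db": +3.0, "articulation": 0.8},
        "sad":      {"tempo": 0.8, "level_db": -4.0, "articulation": 0.1},
        "scary":    {"tempo": 1.1, "level_db": +1.0, "articulation": 0.9},
        "peaceful": {"tempo": 0.9, "level_db": -2.0, "articulation": 0.2},
        "neutral":  {"tempo": 1.0, "level_db":  0.0, "articulation": 0.5},
    }

    def render(note_iois_ms, emotion):
        settings = EMOTION_SETTINGS[emotion]
        # A faster tempo shortens every inter-onset interval proportionally.
        return [ioi / settings["tempo"] for ioi in note_iois_ms]

    print(render([500.0, 250.0], "happy"))  # about [416.7, 208.3] ms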
21.
22.
  • Bresin, Roberto, 1963-, et al. (author)
  • Evaluation of computer systems for expressive music performance
  • 2013
  • In: Guide to Computing for Expressive Music Performance. London: Springer. ISBN 9781447141235, 9781447141228, pp. 181-203
  • Book chapter (peer-reviewed), abstract:
    • In this chapter, we review and summarize different methods for the evaluation of CSEMPs. The main categories of evaluation methods are (1) comparisons with measurements from real performances, (2) listening experiments, and (3) production experiments. Listening experiments can be of different types. For example, in some experiments, subjects may be asked to rate a particular expressive characteristic (such as the emotion conveyed or the overall expression) or to rate the effect of a particular acoustic cue. In production experiments, subjects actively manipulate system parameters to achieve a target performance. Measures for estimating the difference between performances are discussed in relation to the objectives of the model and the objectives of the evaluation. There is also a section presenting and discussing Rencon (Performance Rendering Contest), a contest that compares expressive musical performances of the same score generated by different CSEMPs. Practical examples from previous works are presented, commented on, and analysed.
23.
  • Bresin, Roberto, 1963-, et al. (author)
  • Expressive musical icons
  • 2001
  • In: Proceedings of the International Conference on Auditory Display - ICAD 2001, pp. 141-143
  • Conference paper (peer-reviewed), abstract:
    • Recent research on the analysis and synthesis of music performance has resulted in tools for the control of the expressive content in automatic music performance [1]. These results can be relevant for applications other than performance of music by a computer. In this work we present how the techniques for enhancing the expressive character of music performance can also be used in the design of sound logos, in the control of synthesis algorithms, and for achieving better ringing tones in mobile phones.
24.
  • Bresin, Roberto, 1963-, et al. (author)
  • Expressive sonification of footstep sounds
  • 2010
  • In: Proceedings of ISon 2010. Stockholm, Sweden: KTH Royal Institute of Technology, pp. 51-54
  • Conference paper (peer-reviewed), abstract:
    • In this study we present the evaluation of a model for the interactive sonification of footsteps. The sonification is achieved by means of specially designed sensored shoes which control the expressive parameters of novel sound synthesis models capable of reproducing continuous auditory feedback for walking. In a previous study, sounds corresponding to different grounds were associated with different emotions and gender. In this study, we used an interactive sonification actuated by the sensored shoes to provide auditory feedback to walkers. In an experiment we asked subjects to walk (using the sensored shoes) with four different emotional intentions (happy, sad, aggressive, tender), and for each emotion we manipulated the ground texture sound four times (wood panels, linoleum, muddy ground, and iced snow). Preliminary results show that walkers used a more active walking style (faster pace) when the sound of the walking surface was characterized by a higher spectral centroid (e.g. iced snow), and a less active style (slower pace) when the spectral centroid was low (e.g. muddy ground). Harder texture sounds led to more aggressive walking patterns, while softer ones led to more tender and sad walking styles.
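The spectral centroid that entry 24 correlates with walking pace is the magnitude-weighted mean frequency of a sound's spectrum, a standard definition. A minimal computation, assuming NumPy is available; the example signals are illustrative, not the study's stimuli:

    # Minimal spectral-centroid computation (standard definition). A bright
    # texture such as "iced snow" yields a higher centroid than a dull one
    # such as "muddy ground".
    import numpy as np

    def spectral_centroid(samples: np.ndarray, sample_rate: float) -> float:
        spectrum = np.abs(np.fft.rfft(samples))
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        return float(np.sum(freqs * spectrum) / np.sum(spectrum))

    # Example: white noise (flat spectrum, high centroid) vs. a 110 Hz sine.
    rng = np.random.default_rng(0)
    print(spectral_centroid(rng.standard_normal(44100), 44100))  # about 11 kHz
    tone = np.sin(2 * np.pi * 110 * np.arange(44100) / 44100)
    print(spectral_centroid(tone, 44100))  # about 110 Hz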
25.
26.
  • Bresin, Roberto, 1963-, et al. (author)
  • Interactive sonification
  • 2012
  • In: Journal on Multimodal User Interfaces. Springer Science and Business Media LLC. ISSN 1783-7677, 1783-8738. 5:3-4, pp. 85-86
  • Journal article (peer-reviewed), abstract:
    • In October 2010, Roberto Bresin, Thomas Hermann and Andy Hunt launched a call for papers for a special issue on Interactive Sonification of the Journal on Multimodal User Interfaces (JMUI). The call was published in eight major mailing lists in the field of Sound and Music Computing and on related websites. Twenty manuscripts were submitted for review, and eleven of them were accepted for publication after further improvements. Three of the papers are further developments of work presented at ISon 2010, the Interactive Sonification workshop. Most of the papers went through a three-stage review process. The papers give an interesting overview of the field of Interactive Sonification as it is today. Their topics include the sonification of data exploration and of motion, a new sound synthesis model suitable for interactive sonification applications, a study on perception in the everyday periphery of attention, and the proposal of a conceptual framework for interactive sonification.
27.
28.
  • Bresin, Roberto, 1963-, et al. (author)
  • Looking for the soundscape of the future : preliminary results applying the design fiction method
  • 2020
  • In: Sound and Music Computing Conference 2020.
  • Conference paper (peer-reviewed), abstract:
    • The work presented in this paper is a preliminary study in a larger project that aims to design the sound of the future through our understanding of the soundscapes of the present, and through methods of documentary filmmaking, sound computing and HCI. This work is part of a project that will complement and run parallel to Erik Gandini's research project "The Future through the Present", which explores how a documentary narrative can create a projection into the future and develop a cinematic documentary aesthetics that releases documentary film from the constraints of dealing with the present or the past. The point of departure is our relationship to labour at a time when Robotics, VR/AR and AI applied to Big Data outweigh and augment our physical and cognitive capabilities, with automation expected to replace humans on a large scale within most professional fields. From an existential perspective this poses the question: what will we do when we don't have to work? And it challenges us to formulate a new idea of work beyond its historical role. If the concept of work ethics changes, how would that redefine soundscapes? Will new sounds develop? Will sounds from the past resurface? In the context of this paper we try to tackle these questions by first applying the Design Fiction method. In a workshop, twenty-three participants predicted both positive and negative future scenarios, including both lo-fi and hi-fi soundscapes, in which people will be able to control and personalize soundscapes. The results are presented, summarized and discussed.
29.
30.
31.
  • Bresin, Roberto (author)
  • Real-time visualization of musical expression
  • 2004
  • In: Proceedings of Network of Excellence HUMAINE Workshop "From Signals to Signs of Emotion and Vice Versa", pp. 19-23
  • Conference paper (peer-reviewed), abstract:
    • A system for real-time feedback of expressive music performance is presented. The feedback is provided by using a graphical interface where acoustic cues are presented in an intuitive fashion. The graphical interface presents on the computer screen a three-dimensional object with continuously changing shape, size, position, and colour. Some of the acoustic cues were associated with the shape of the object, others with its position. For instance, articulation was associated with shape: staccato corresponded to an angular shape and legato to a rounded shape. The emotional expression resulting from the combination of cues was mapped in terms of the colour of the object (e.g., sadness/blue). To determine which colours were most suitable for each emotion, a test was run. Subjects rated how well each of 8 colours corresponds to each of 12 music performances expressing different emotions.
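A condensed sketch of the cue-to-graphics mapping that entry 31 describes (articulation onto shape, emotion onto colour). Only the sadness/blue pairing is stated in the abstract; the other colour entries, the 0-to-1 articulation scale, and the visual_params helper are assumptions:

    # Condensed sketch of the cue-to-visual mapping (only sadness -> blue
    # comes from the abstract; the rest of the table is assumed).

    EMOTION_COLOURS = {"sadness": "blue", "happiness": "yellow", "anger": "red"}

    def visual_params(articulation: float, emotion: str) -> dict:
        """articulation: 0.0 = legato (rounded) .. 1.0 = staccato (angular)."""
        return {
            "shape_angularity": articulation,  # staccato angular, legato rounded
            "colour": EMOTION_COLOURS.get(emotion, "grey"),
        }

    print(visual_params(0.9, "sadness"))  # {'shape_angularity': 0.9, 'colour': 'blue'}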
32.
33.
  • Bresin, Roberto, 1963-, et al. (author)
  • Robust Non-Verbal Expression in Humanoid Robots: New Methods for Augmenting Expressive Movements with Sound
  • 2021
  • Conference paper (peer-reviewed), abstract:
    • The aim of the SONAO project is to establish new methods based on sonification of expressive movements for achieving a robust interaction between users and humanoid robots. We want to achieve this by combining the competences of the research team members in the fields of social robotics, sound and music computing, affective computing, and body motion analysis. We want to engineer sound models for implementing effective mappings between stylized body movements and sound parameters that will enable an agent to express high-level body motion qualities through sound. These mappings are paramount for supporting feedback to and understanding of robot body motion. The project will result in the development of new theories, guidelines, models, and tools for the sonic representation of high-level body motion qualities in interactive applications. This work is part of the growing research field known as data sonification, in which we combine methods and knowledge from the fields of interactive sonification, embodied cognition, multisensory perception, and non-verbal and gestural communication in robots.
34.
  • Bresin, Roberto, 1963-, et al. (author)
  • Rule-based emotional colouring of music performance
  • 2000
  • In: Proceedings of the International Computer Music Conference - ICMC 2000. San Francisco: ICMA, pp. 364-367
  • Conference paper (peer-reviewed)
35.
  • Bresin, Roberto, et al. (author)
  • Software tools for musical expression
  • 2000
  • In: Proceedings of the International Computer Music Conference 2000. San Francisco, USA: Computer Music Association, pp. 499-502
  • Conference paper (peer-reviewed)
36.
  • Bresin, Roberto, 1963-, et al. (author)
  • Software Tools for Musical Expression
  • 2000
  • In: Proceedings of the 2000 International Computer Music Conference, ICMC 2000. International Computer Music Association.
  • Conference paper (peer-reviewed), abstract:
    • In this article, software tools that model principles used in music performance are presented. They all add expressive variations to a score-based representation of the music. The two main tools are Director Musices, a Lisp program, and Japer, a Java applet. Some possible applications are illustrated: music production, teaching of music performance, and performance of mobile phone ringing tones.
37.
  • Bresin, Roberto, 1963-, et al. (author)
  • Sonification of the self vs. sonification of the other : Differences in the sonification of performed vs. observed simple hand movements
  • 2020
  • In: International Journal of Human-Computer Studies. Elsevier BV. ISSN 1071-5819, 1095-9300. Vol. 144
  • Journal article (peer-reviewed), abstract:
    • Existing works on interactive sonification of movements, i.e., the translation of human movement qualities from the physical to the auditory domain, usually adopt a predetermined approach: the way in which movement features modulate the characteristics of sound is fixed. In our work we want to go one step further and demonstrate that the user's role can influence the tuning of the mapping between movement cues and sound parameters. Here, we aim to verify if and how the mapping changes when the user is either the performer or the observer of a series of body movements (tracing a square or an infinity shape with the hand in the air). We asked participants to tune the movement sonification while they were directly performing the sonified movement vs. while watching another person performing the movement and listening to its sonification. Results show that the tuning of the sonification chosen by participants is influenced by three variables: the role of the user (performer vs. observer), movement quality (the amount of Smoothness and Directness in the movement), and physical parameters of the movements (velocity and acceleration). Performers focused more on the quality of their movement, while observers focused more on the sonic rendering, making it more expressive and more connected to low-level physical features.
38.
  • Bresin, Roberto, et al. (author)
  • Sound and Music Computing at KTH
  • 2012
  • In: Trita-TMH. Stockholm: KTH Royal Institute of Technology. ISSN 1104-5787. 52:1, pp. 33-35
  • Journal article (other academic/artistic), abstract:
    • The SMC Sound and Music Computing group at KTH (formerly the Music Acoustics group) is part of the Department of Speech, Music and Hearing, School of Computer Science and Communication. In this short report we present the current status of the group, focusing mainly on its research.
39.
  • Bresin, Roberto, et al. (author)
  • Sound and Music Computing at KTH
  • 2012
  • In: SMC Sweden 2012 Sound and Music Computing, Understanding and Practicing in Sweden. Stockholm: Department of Speech, Music and Hearing, Royal Institute of Technology, pp. 33-35
  • Conference paper (peer-reviewed)
40.
  • Bresin, Roberto, 1963-, et al. (author)
  • Sound Forest/Ljudskogen : A Large-Scale String-Based Interactive Musical Instrument
  • 2016
  • In: Sound and Music Computing 2016. SMC Sound&Music Computing NETWORK. ISBN 9783000537004, pp. 79-84
  • Conference paper (peer-reviewed), abstract:
    • In this paper we present a string-based, interactive, large-scale installation for a new museum dedicated to the performing arts, Scenkonstmuseet, which will be inaugurated in 2017 in Stockholm, Sweden. The installation will occupy an entire room measuring 10x5 meters. We aim to create a digital musical instrument (DMI) that facilitates intuitive musical interaction, thereby enabling visitors to quickly start creating music either alone or together. The interface should be able to serve as a pedagogical tool; visitors should be able to learn about concepts related to music and music making by interacting with the DMI. Since the lifespan of the installation will be approximately five years, one main concern is to create an experience that will encourage visitors to return to the museum for continued instrument exploration. In other words, the DMI should be designed to facilitate long-term engagement. Finally, an important aspect of the design of the installation is that the DMI should be accessible and provide a rich experience for all museum visitors, regardless of age or abilities.
41.
  • Bresin, Roberto, et al. (author)
  • The Radio Baton as configurable musical instrument and controller
  • 2003
  • In: Proc. Stockholm Music Acoustics Conference, pp. 689-691
  • Conference paper (peer-reviewed), abstract:
    • The Max Mathews radio baton (RB) has been produced in about 40 units to date. It has usually been applied as an orchestra conducting system, as an interactive music composition controller using typical percussionist gestures, and as a controller for sound synthesis models. In the framework of the EU-funded Sounding Object project, the RB has found new application scenarios. Three applications were based on this controller. This was achieved by changing the gesture controls: instead of the default batons, a new radio sender that fits the fingertips was developed. This new radio sender allows musicians' interaction based on hand gestures, and it can also fit different devices. A Pd model of DJ scratching techniques (submitted to SMAC03) was controlled with the RB and the fingertip radio sender. This controller allows DJs direct control of sampled sounds while maintaining hand gestures similar to those used on vinyl. The sound model of a bodhran (submitted to SMAC03) was controlled with a traditional playing approach. The RB was controlled with a traditional bodhran double beater with one fingertip radio sender at each end. This allowed detection of the beater position on the RB surface, the surface corresponding to the membrane in the sound model. In a third application, the fingertip controller was used to move a virtual ball rolling along the elastic surface of a box placed over the surface of the RB. The DJ console and the virtual bodhran were played in concerts.
42.
  • Bresin, Roberto, et al. (author)
  • Toward a new model for sound control
  • 2001
  • In: Proceedings of the COST G-6 Conference on Digital Audio Effects (DAFX-01), Limerick, Ireland, December 6-8, 2001, pp. 45-49
  • Conference paper (peer-reviewed), abstract:
    • The control of sound synthesis is a well-known problem. This is particularly true if the sounds are generated with physical modeling techniques, which typically need the specification of numerous control parameters. In the present work, outcomes from studies on automatic music performance are used to tackle this problem.
43.
  • Bresin, Roberto (author)
  • Virtual virtuosity
  • 2000
  • Doctoral thesis (other academic/artistic), abstract:
    • This dissertation presents research in the field of automatic music performance with a special focus on piano. A system is proposed for automatic music performance, based on artificial neural networks (ANNs). A complex, ecological-predictive ANN was designed that listens to the last played note, predicts the performance of the next note, looks three notes ahead in the score, and plays the current tone. This system was able to learn a professional pianist's performance style at the structural micro-level. In a listening test, performances by the ANN were judged clearly better than deadpan performances and slightly better than performances obtained with generative rules. The behavior of an ANN was compared with that of a symbolic rule system with respect to musical punctuation at the micro-level. The rule system mostly gave better results, but some segmentation principles of an expert musician were only generalized by the ANN. Measurements of professional pianists' performances revealed interesting properties in the articulation of notes marked staccato and legato in the score. Performances were recorded on a grand piano connected to a computer. Staccato was realized by a micropause of about 60% of the inter-onset interval (IOI), while legato was realized by keeping two keys depressed simultaneously; the relative key overlap time was dependent on the IOI: the larger the IOI, the shorter the relative overlap. The magnitudes of these effects changed with the pianists' coloring of their performances and with the pitch contour. These regularities were modeled in a set of rules for articulation in automatic piano music performance. Emotional coloring of performances was realized by means of macro-rules implemented in the Director Musices performance system. These macro-rules are groups of rules that were combined such that they reflected previous observations on musical expression of specific emotions. Six emotions were simulated. A listening test revealed that listeners were able to recognize the intended emotional colorings. In addition, some possible future applications are discussed in the fields of automatic music performance, music education, automatic music analysis, virtual reality and sound synthesis.
44.
  • Bresin, Roberto (author)
  • What is the color of that music performance?
  • 2005
  • In: Proceedings of the International Computer Music Conference - ICMC 2005. Barcelona, pp. 367-370
  • Conference paper (peer-reviewed), abstract:
    • The representation of expressivity in music is still a fairly unexplored field. Alternative ways of representing musical information are necessary when providing feedback on emotion expression in music, such as in real-time tools for music education or in the display of large music databases. One possible solution could be a graphical non-verbal representation of expressivity in music performance using color as an index of emotion. To determine which colors are most suitable for an emotional expression, a test was run. Subjects rated how well each of 8 colors and their 3 nuances corresponds to each of 12 music performances expressing different emotions. Performances were played by professional musicians on 3 instruments: saxophone, guitar, and piano. Results show that subjects associated different hues with different emotions. Also, dark colors were associated with music in minor tonality and light colors with music in major tonality. The correspondence between spectrum energy and color hue is discussed in a preliminary fashion.
45.
  • Burger, Birgitta, et al. (author)
  • Communication of Musical Expression by Means of Mobile Robot Gestures
  • 2010
  • In: Journal on Multimodal User Interfaces. Stockholm: Springer Science and Business Media LLC. ISSN 1783-7677, 1783-8738. 3:1, pp. 109-118
  • Journal article (peer-reviewed), abstract:
    • We developed a robotic system that can behave in an emotional way. A simple 3-wheeled robot with limited degrees of freedom was designed. Our goal was to make the robot display emotions in music performance by performing expressive movements. These movements were compiled and programmed based on literature about emotion in music, musicians' movements in expressive performances, and object shapes that convey different emotional intentions. The emotions happiness, anger, and sadness were implemented in this way. General results from behavioral experiments show that emotional intentions can be synthesized, displayed and communicated by an artificial creature, even in constrained circumstances.
46.
47.
48.
  • Camurri, Antonio, et al. (author)
  • User-centric context-aware mobile applications for embodied music listening
  • 2009
  • In: User Centric Media. Heidelberg: Springer Berlin. ISBN 9783642126307, pp. 21-30
  • Book chapter (peer-reviewed), abstract:
    • This paper surveys a collection of sample applications for networked user-centric context-aware embodied music listening. The applications have been designed and developed in the framework of the EU-ICT Project SAME (www.sameproject.eu) and were presented at the Agora Festival (IRCAM, Paris, France) in June 2009. All of them address, in different ways, the concept of embodied, active listening to music, i.e., enabling listeners to interactively operate in real time on the music content by means of their movements and gestures as captured by mobile devices. On the occasion of the Agora Festival, the applications were also evaluated by both expert and non-expert users.
49.
  • Castellano, Ginevra, et al. (author)
  • Expressive Control of Music and Visual Media by Full-Body Movement
  • 2007
  • In: Proceedings of the 7th International Conference on New Interfaces for Musical Expression, NIME '07. New York, NY, USA: ACM Press, pp. 390-391
  • Conference paper (peer-reviewed), abstract:
    • In this paper we describe a system which allows users to use their full body to control, in real time, the generation of expressive audio-visual feedback. The system extracts expressive motion features from the user's full-body movements and gestures. The values of these motion features are mapped both onto acoustic parameters, for the real-time expressive rendering of a piece of music, and onto real-time generated visual feedback projected on a screen in front of the user.
50.
  • Castellano, Ginevra, et al. (author)
  • User-Centered Control of Audio and Visual Expressive Feedback by Full-Body Movements
  • 2007
  • In: Affective Computing and Intelligent Interaction. Berlin/Heidelberg: Springer. ISBN 9783540748885, pp. 501-510
  • Book chapter (peer-reviewed), abstract:
    • In this paper we describe a system allowing users to express themselves through their full-body movements and gestures, and to control in real time the generation of audio-visual feedback. The system analyses the user's full-body movements and gestures in real time, extracts expressive motion features, and maps their values onto real-time control of acoustic parameters for rendering a music performance. At the same time, visual feedback generated in real time is projected on a screen in front of the users, showing their silhouette coloured according to the emotion their movement communicates. Human movement analysis and visual feedback generation were done with the EyesWeb software platform, and the music performance rendering with pDM. Evaluation tests were done with human participants to test the usability of the interface and the effectiveness of the design.
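Entries 49 and 50 describe the same pipeline: extract an expressive motion feature from video frames, then map it onto acoustic parameters. A bare-bones sketch under simplified assumptions; the frame-difference "quantity of motion" feature and the tempo mapping below are illustrative stand-ins, since the actual systems extract features in EyesWeb and render the performance with pDM:

    # Bare-bones sketch of a motion-to-music mapping (simplified assumptions;
    # the real systems use EyesWeb for analysis and pDM for rendering).
    import numpy as np

    def quantity_of_motion(prev_frame: np.ndarray, frame: np.ndarray) -> float:
        """Fraction of pixels that changed between frames: a crude activity measure."""
        diff = np.abs(frame.astype(float) - prev_frame.astype(float))
        return float(np.mean(diff > 10))

    def tempo_scale(qom: float) -> float:
        """Map motion activity onto a tempo multiplier (0.8x still, 1.3x very active)."""
        return 0.8 + 0.5 * min(qom * 5.0, 1.0)

    a = np.zeros((120, 160), dtype=np.uint8)
    b = a.copy()
    b[40:80, 60:100] = 255  # a region that moved between frames
    qom = quantity_of_motion(a, b)
    print(qom, tempo_scale(qom))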
Type of publication
conference paper (92)
journal article (53)
book chapter (18)
doctoral thesis (10)
proceedings (editorship) (3)
artistic work (2)
other publication (2)
research review (1)
licentiate thesis (1)
Type of content
peer-reviewed (161)
other academic/artistic (19)
Author/editor
Bresin, Roberto (91)
Bresin, Roberto, 196 ... (82)
Friberg, Anders (28)
Hansen, Kjetil Falke ... (19)
Elblaus, Ludvig, 198 ... (14)
Dubus, Gaël (13)
Frid, Emma (12)
Latupeirissa, Adrian ... (10)
Fabiani, Marco (9)
Frid, Emma, 1988- (6)
Panariello, Claudio (6)
Sundberg, Johan (5)
Sallnäs Pysander, Ev ... (5)
De Poli, Giovanni (5)
Camurri, Antonio (5)
Leite, Iolanda (4)
Pauletto, Sandra (4)
Rocchesso, Davide (4)
Dahl, Sofia (4)
Moll, Jonas, 1982- (4)
Volpe, Gualtiero (4)
Holzapfel, Andre (3)
Falkenberg, Kjetil, ... (3)
Battel, Giovanni Umb ... (3)
Dahl, S. (3)
Falkenberg Hansen, K ... (3)
Pauletto, Sandra, As ... (3)
Mancini, Maurizio (3)
Favero, Federico (3)
Castellano, Ginevra (2)
Dravins, Christina (2)
Hansen, Kjetil Falke ... (2)
Laaksolahti, Jarmo (2)
Ternström, Sten (2)
Askenfelt, Anders (2)
Vidolin, Alvise (2)
Battel, G. U. (2)
Rönnberg, Niklas, 19 ... (2)
Fontana, Federico (2)
Papetti, Stefano (2)
Visell, Yon (2)
de Witt, Anna (2)
Widmer, Gerhard (2)
Friberg, Anders, Pro ... (2)
Lewandowski, Vincent (2)
Tsaknaki, Vasiliki (2)
Burger, Birgitta (2)
Bevilacqua, F. (2)
Välimäki, V. (2)
Kleimola, Jari (2)
University
Kungliga Tekniska Högskolan (175)
Södertörns högskola (13)
Uppsala universitet (7)
Örebro universitet (4)
Stockholms universitet (2)
Linköpings universitet (2)
Kungl. Musikhögskolan (2)
Chalmers tekniska högskola (1)
RISE (1)
Stockholms konstnärliga högskola (1)
Language
English (179)
Swedish (1)
Research subject (UKÄ/SCB)
Natural sciences (145)
Humanities (57)
Engineering and technology (56)
Social sciences (47)
Medical and health sciences (6)
Agricultural sciences (1)
